Similar Documents (20 results found)
1.
This paper analyzes the shortcomings of the traditional feature selection methods chi-square statistic and information gain in text classification, and concludes that the key to feature selection for text classification is to select feature terms that are concentrated in a particular class of documents, evenly distributed within that class, and frequently occurring. Accordingly, taking into account the document frequency and term frequency of feature terms as well as their inter-class concentration and intra-class dispersion, a feature selection evaluation function based on intra-class and inter-class document frequency and term frequency statistics is proposed. Using this evaluation function, a fixed proportion of feature terms is selected from each class of the training set to form that class's term library, and the term library of the training set is the union of the per-class libraries. Experiments on SVM-based Chinese text classification show that, compared with the traditional chi-square statistic and information gain, the method improves classification performance to a certain extent.
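A minimal sketch of such an evaluation function (the abstract does not give the exact weighting, so the multiplicative combination below is an assumption):

```python
import numpy as np

def class_term_score(tf, labels, target):
    """Illustrative score combining document frequency, term frequency,
    inter-class concentration and intra-class dispersion.
    tf: (n_docs, n_terms) raw term-frequency matrix; labels: (n_docs,)
    class per document; target: the class whose term library is built.
    The multiplicative combination is an assumption, not the paper's formula."""
    in_cls = labels == target
    df_in = (tf[in_cls] > 0).mean(axis=0)        # intra-class document frequency
    df_out = (tf[~in_cls] > 0).mean(axis=0)      # document frequency in other classes
    concentration = df_in / (df_in + df_out + 1e-12)  # inter-class concentration
    dispersion = df_in                           # share of class docs containing the term
    freq = tf[in_cls].sum(axis=0) / (tf[in_cls].sum() + 1e-12)  # in-class term frequency
    return concentration * dispersion * freq

# Per-class term libraries: take the top-k terms of every class and union them,
# as the abstract describes.
# library = set().union(*(set(np.argsort(class_term_score(tf, labels, c))[::-1][:k])
#                         for c in np.unique(labels)))
```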

2.
In this paper, we present the MIFS-C variant of the mutual information feature-selection algorithms. We present an algorithm to find the optimal value of the redundancy parameter, which is a key parameter in MIFS-type algorithms. Furthermore, we present an algorithm that speeds up the execution time of all the MIFS variants. Overall, the presented MIFS-C has classification accuracy comparable with (in some cases even better than) other MIFS algorithms, while running faster. We compared this feature selector with other feature selectors and found that it performs better in most cases. MIFS-C performed especially well on the breakeven and F-measure because the algorithm can be tuned to optimise these evaluation measures. Jan Bakus received the B.A.Sc. and M.A.Sc. degrees in electrical engineering from the University of Waterloo, Waterloo, ON, Canada, in 1996 and 1998, respectively, and the Ph.D. degree in systems design engineering in 2005. He is currently working at Maplesoft, Waterloo, ON, Canada, as an applications engineer, where he is responsible for the development of application-specific toolboxes for the Maple scientific computing software. His research interests are in the areas of feature selection for text classification, text classification, text clustering, and information retrieval. He is the recipient of the Carl Pollock Fellowship award from the University of Waterloo and the Datatel Scholars Foundation scholarship from Datatel. Mohamed S. Kamel holds a Ph.D. in computer science from the University of Toronto, Canada. He is at present Professor and Director of the Pattern Analysis and Machine Intelligence Laboratory in the Department of Electrical and Computer Engineering, University of Waterloo, Canada. Professor Kamel holds a Canada Research Chair in Cooperative Intelligent Systems. Dr. Kamel's research interests are in machine intelligence, neural networks and pattern recognition with applications in robotics and manufacturing. He has authored and coauthored over 200 papers in journals and conference proceedings, 2 patents and numerous technical and industrial project reports. Under his supervision, 53 Ph.D. and M.A.Sc. students have completed their degrees. Dr. Kamel is a member of ACM, AAAI, CIPS and APEO and has been named a Fellow of IEEE (2005). He is the editor-in-chief of the International Journal of Robotics and Automation, Associate Editor of the IEEE SMC, Part A, the International Journal of Image and Graphics and Pattern Recognition Letters, and is a member of the editorial board of Intelligent Automation and Soft Computing. He has served as a consultant to many companies, including NCR, IBM, Nortel, VRP and CSA. He is a member of the board of directors and cofounder of Virtek Vision International in Waterloo.
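For reference, a sketch of the greedy MIFS-style selection loop the MIFS family builds on (Battiti's criterion relevance − β·redundancy; the paper's automatic tuning of the redundancy parameter β is not reproduced here):

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def mifs(X, y, k, beta=0.5):
    """Greedy MIFS selection: pick the feature maximizing
    I(f; C) - beta * sum of I(f; s) over already selected features s.
    X must hold discrete feature values; beta is the redundancy parameter
    that the MIFS-C variant tunes automatically."""
    n = X.shape[1]
    relevance = np.array([mutual_info_score(X[:, j], y) for j in range(n)])
    redundancy = np.zeros(n)
    selected, remaining = [], set(range(n))
    for _ in range(k):
        best = max(remaining, key=lambda j: relevance[j] - beta * redundancy[j])
        selected.append(best)
        remaining.discard(best)
        for j in remaining:  # incremental redundancy update keeps the loop cheap
            redundancy[j] += mutual_info_score(X[:, j], X[:, best])
    return selected
```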

3.
Feature selection for text categorization is a well-studied problem whose goal is to improve the effectiveness of categorization, the efficiency of computation, or both. Text categorization systems based on traditional term matching represent documents in a vector space model; however, this requires a high-dimensional space to represent each document and ignores the semantic relationships between terms, which leads to poor categorization accuracy. The latent semantic indexing method can overcome this problem by using statistically derived conceptual indices to replace the individual terms. To improve both the accuracy and the efficiency of categorization, in this paper we propose a two-stage feature selection method. First, we apply a novel feature selection method to reduce the dimensionality of the term space; then we construct a new semantic space between terms based on latent semantic indexing. In experiments on spam database categorization, we find that our two-stage feature selection method performs better.
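A sketch of such a two-stage pipeline in scikit-learn, using chi-square as a stand-in for the paper's novel first-stage selector and truncated SVD as the usual realization of latent semantic indexing (k and n_components are illustrative values):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.decomposition import TruncatedSVD
from sklearn.pipeline import make_pipeline

# Stage 1 cuts the term space; stage 2 maps the surviving terms into a
# latent semantic space.
two_stage = make_pipeline(
    TfidfVectorizer(),
    SelectKBest(chi2, k=2000),       # stage 1: term-level selection
    TruncatedSVD(n_components=100),  # stage 2: latent semantic indexing
)
# X_lsi = two_stage.fit_transform(train_texts, train_labels)  # labels drive chi2
```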

4.
To effectively remove noise from acoustic emission signals, generalized S-transform filtering is applied to acoustic emission signal denoising. Three approaches are compared: the zeroing method of the generalized S-transform, a filtering-operator method based on band-pass filter design, and time-frequency filtering, with time-frequency filtering operators designed for the different time-frequency characteristics of the signals. The results show that all three S-transform-based time-frequency filtering methods denoise acoustic emission signals well and overcome the defect of traditional filtering methods, whose filter factors cannot vary with time and frequency. Among them, the time-frequency filtering method removes noise best at both high and low signal-to-noise ratios and can meet the requirements of signal processing.
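A sketch of FFT-based discrete S-transform time-frequency filtering (a plain Stockwell transform with a user-supplied mask; the paper's generalized-S-transform operators are not reproduced):

```python
import numpy as np

def s_transform(x):
    """FFT-based discrete Stockwell transform of a real signal.
    Returns S of shape (N//2 + 1, N): frequency voices 0..N/2 over time."""
    N = len(x)
    X = np.fft.fft(x)
    m = np.fft.fftfreq(N) * N                 # signed frequency indices
    S = np.zeros((N // 2 + 1, N), dtype=complex)
    S[0, :] = x.mean()                        # zero-frequency voice
    for n in range(1, N // 2 + 1):
        window = np.exp(-2.0 * np.pi ** 2 * m ** 2 / n ** 2)  # Gaussian in frequency
        S[n, :] = np.fft.ifft(np.roll(X, -n) * window)
    return S

def tf_filter(x, mask):
    """Zero masked time-frequency coefficients, then invert using the
    S-transform identity sum over time of S[n, :] == X[n]. Even N assumed."""
    N = len(x)
    S = s_transform(x) * mask                 # mask: (N//2 + 1, N) array in [0, 1]
    X_half = S.sum(axis=1)
    X_full = np.zeros(N, dtype=complex)
    X_full[:N // 2 + 1] = X_half
    X_full[N // 2 + 1:] = np.conj(X_half[1:N // 2][::-1])  # Hermitian symmetry
    return np.fft.ifft(X_full).real
```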

5.
In this paper, we propose a filtering method for feature selection called ALOFT (At Least One FeaTure). The proposed method focuses on specific characteristics of the text categorization domain. It ensures that every document in the training set is represented by at least one feature, and the number of selected features is determined in a data-driven way. We compare the effectiveness of the proposed method with the Variable Ranking method using three text categorization benchmarks (Reuters-21578, 20 Newsgroups and WebKB), two different classifiers (k-Nearest Neighbor and Naïve Bayes) and five feature evaluation functions. The experiments show that ALOFT obtains results equivalent to or better than the classical Variable Ranking.
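The coverage rule at the heart of the method can be sketched in a few lines (an illustration, not the authors' exact implementation):

```python
import numpy as np

def aloft(X, scores):
    """For each training document keep its highest-scoring term, so every
    document is covered by at least one feature and the final set size is
    data-driven. X: (n_docs, n_terms) term counts; scores: (n_terms,)
    values from any feature evaluation function (chi-square, IG, ...)."""
    selected = set()
    for d in range(X.shape[0]):
        present = np.flatnonzero(X[d])           # terms occurring in document d
        if present.size:
            selected.add(int(present[np.argmax(scores[present])]))
    return sorted(selected)
```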

6.
An Improved Feature Selection Method for Text Classification
The high dimensionality of the feature space is one of the main obstacles in text classification, and feature selection is an effective method of dimension reduction. Existing feature selection functions include document frequency (DF), information gain (IG) and mutual information (MI). Starting from basic constraints on features and the design steps of high-performance feature selection methods, an improved feature selection method, SIG, is proposed. While preserving classification quality, it raises the weight given to medium- and low-frequency terms. Experiments on the Reuters-21578 corpus show that the method achieves good classification results while making better use of medium- and low-frequency terms with strong discriminative power.
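For context, the classical information gain score that SIG modifies can be computed as follows (the standard formula, not the SIG variant itself):

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def information_gain(term_present, labels):
    """IG(t) = H(C) - P(t) H(C|t) - P(not t) H(C|not t).
    term_present: (n_docs,) boolean; labels: (n_docs,) class ids."""
    _, counts = np.unique(labels, return_counts=True)
    ig = entropy(counts / len(labels))
    for mask in (term_present, ~term_present):
        if mask.any():
            _, sub = np.unique(labels[mask], return_counts=True)
            ig -= mask.mean() * entropy(sub / mask.sum())
    return ig
```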

7.
Feature selection is known to be a good solution to the high dimensionality of the feature space, and filter-based methods are the most commonly preferred feature selection methods for text classification. In a common filter-based feature selection scheme, each feature is assigned a score reflecting its discriminative power, the features are sorted in descending order of score, and the top N features are added to the feature set, where N is generally determined empirically. In this paper, an improved global feature selection scheme (IGFSS) is proposed in which this last step is modified to obtain a more representative feature set. Although a feature set constructed by a common feature selection scheme successfully represents some of the classes, a number of classes may not be represented at all. Consequently, IGFSS aims to improve the classification performance of global feature selection methods by creating a feature set that represents all classes almost equally. For this purpose, a local feature selection method is used in IGFSS to label features according to their discriminative power on individual classes, and these labels are used while producing the feature sets. Experimental results on well-known benchmark datasets with various classifiers indicate that IGFSS improves classification performance in terms of two widely known metrics, Micro-F1 and Macro-F1.
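A sketch of the modified last step (the per-class quota and the odds-ratio labelling are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def igfss_final_step(global_scores, feature_class, n_classes, N):
    """Fill the final feature set with roughly N / n_classes top-ranked
    features per class instead of the global top N. feature_class[j] is
    the class on which feature j is most discriminative according to some
    local method (e.g. odds ratio)."""
    quota = N // n_classes
    taken, counts = [], np.zeros(n_classes, dtype=int)
    for j in np.argsort(global_scores)[::-1]:    # best global score first
        c = feature_class[j]
        if counts[c] < quota:
            taken.append(int(j))
            counts[c] += 1
        if len(taken) == N:
            break
    return taken                                 # may fall short if a class runs dry
```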

8.
A New Feature Selection Method for Text Classification
In automatic text classification systems, feature selection is an effective way to reduce the dimensionality of text vectors. Based on an analysis of several commonly used feature selection evaluation functions, a new evaluation function, the mutual information ratio, is proposed. Experiments show that the method is simple and practical and helps improve the effectiveness of the selected feature subset.

9.
Research on Feature Extraction Methods for Text Classification and Text Clustering
Text information processing has become an increasingly mature discipline with increasingly broad applications. Text classification and clustering are important research topics in natural language processing that arose from the needs of information retrieval and querying. Faced with rapidly expanding volumes of text, these techniques let people organize information efficiently so that it can be located and routed accurately, improving the efficiency of user queries and retrieval. This paper studies text classification and clustering, the most important research directions in text information processing; it analyzes the importance of feature extraction in both tasks, argues why feature extraction is necessary, and finally describes feature extraction methods for text classification and for text clustering respectively.

10.
11.
Searching for an optimal feature subset in a high-dimensional feature space is an NP-complete problem; hence, traditional optimization algorithms are inefficient for solving large-scale feature selection problems, and meta-heuristic algorithms are extensively adopted to solve such problems efficiently. This study proposes a regression-based particle swarm optimization for the feature selection problem. The proposed algorithm increases population diversity and avoids being trapped in local optima by improving the jump ability of flying particles. Data sets collected from the UCI machine learning repository are used to evaluate the effectiveness of the proposed approach, with classification accuracy as the criterion of classifier performance. Results show that the proposed approach outperforms both genetic algorithms and sequential search algorithms.
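A baseline binary PSO feature selector is sketched below; the paper's regression-based jump improvement is not reproduced, and the k-NN cross-validation fitness is an assumption:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def binary_pso_fs(X, y, n_particles=20, iters=30, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain binary PSO for feature selection. Each particle is a 0/1 mask;
    fitness is the cross-validated accuracy of a k-NN classifier."""
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]

    def fitness(bits):
        mask = bits.astype(bool)
        if not mask.any():
            return 0.0
        return cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()

    pos = (rng.random((n_particles, n_feat)) < 0.5).astype(float)
    vel = rng.normal(0.0, 1.0, (n_particles, n_feat))
    pbest = pos.copy()
    pbest_fit = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()
    gbest_fit = pbest_fit.max()

    for _ in range(iters):
        r1 = rng.random((n_particles, n_feat))
        r2 = rng.random((n_particles, n_feat))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        prob = 1.0 / (1.0 + np.exp(-vel))             # sigmoid transfer function
        pos = (rng.random((n_particles, n_feat)) < prob).astype(float)
        fit = np.array([fitness(p) for p in pos])
        better = fit > pbest_fit
        pbest[better] = pos[better]
        pbest_fit[better] = fit[better]
        if fit.max() > gbest_fit:
            gbest, gbest_fit = pos[fit.argmax()].copy(), fit.max()
    return gbest.astype(bool), gbest_fit
```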

12.
This paper proposes a novel feature selection method that combines a self-representation loss function, a graph regularization term and an l2,1-norm regularization term. Unlike the traditional least squares loss function, which focuses on minimizing the regression error between the class labels and their predictions, the proposed self-representation loss function represents each feature as a linear combination of its relevant features, with the aim of selecting representative features and ensuring robustness to outliers. The graph regularization terms encode two kinds of inherent information: the relationship between samples (the sample-sample relation for short) and the relationship between features (the feature-feature relation for short). The feature-feature relation reflects the similarity between two features, and the sample-sample relation the similarity between two samples; both relations are preserved in the coefficient matrix. The l2,1-norm regularization term conducts the actual feature selection, selecting features that satisfy the characteristics mentioned above. Furthermore, we put forward a new optimization method to solve the objective function. Finally, we feed the reduced data into a support vector machine (SVM) to conduct classification on real datasets. The experimental results show that the proposed method performs better than state-of-the-art methods such as k-nearest neighbor, ridge regression and SVM.
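Dropping the graph terms for brevity, the self-representation core min_W ||X − XW||_F² + λ||W||_{2,1} can be solved by iteratively reweighted least squares; a sketch under that simplification (the paper's full optimizer also handles the two graph regularizers):

```python
import numpy as np

def self_rep_l21(X, lam=1.0, iters=30, eps=1e-8):
    """Solve min_W ||X - XW||_F^2 + lam * ||W||_{2,1} by iteratively
    reweighted least squares, then score feature i by the l2 norm of the
    i-th row of W (representative features get large rows)."""
    d = X.shape[1]
    G = X.T @ X
    W = np.eye(d)
    for _ in range(iters):
        row_norms = np.sqrt((W ** 2).sum(axis=1)) + eps
        D = np.diag(1.0 / (2.0 * row_norms))      # subgradient reweighting of l2,1
        W = np.linalg.solve(G + lam * D, G)       # stationary-point update
    return np.sqrt((W ** 2).sum(axis=1))

# scores = self_rep_l21(X_train); keep np.argsort(scores)[::-1][:k] features,
# then train an SVM on the reduced data as in the paper's experiments.
```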

13.
The Journal of Supercomputing - Due to extensive web applications, sentiment classification (SC) has become a relevant issue of interest among text mining experts. The extensive online reviews...

14.
An Improved χ² Statistic Method for Text Feature Selection
Feature selection is a current research hotspot, especially in text classification. The χ² statistic has two defects: it underweights low-frequency terms, and it overweights features that rarely occur in the target class but are common in other classes. This paper improves the χ² statistic to address both defects and compares the original and improved methods through simulation and comparative experiments on text classification. In these experiments, the improved method classifies better than the traditional one.
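For reference, the classical CHI statistic whose defects are being corrected (the paper's improved formula is not reproduced here):

```python
import numpy as np

def chi_square(term_present, in_class):
    """Classical CHI for one term and one class, from the 2x2 contingency
    table; both arguments are boolean arrays over the training documents."""
    A = np.sum(term_present & in_class)          # term and class co-occur
    B = np.sum(term_present & ~in_class)         # term without the class
    C = np.sum(~term_present & in_class)         # class without the term
    D = np.sum(~term_present & ~in_class)        # neither
    N = A + B + C + D
    den = (A + C) * (B + D) * (A + B) * (C + D)
    return N * float(A * D - B * C) ** 2 / den if den else 0.0
```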

15.
A formal study of feature selection in text categorization
One of the most important issues in Text Categorization (TC) is Feature Selection (FS). Many FS methods have been put forward and widely used in the TC field, such as Information Gain (IG), Document Frequency thresholding (DF) and Mutual Information (MI). Empirical studies show that some of these (e.g. IG, DF) produce better categorization performance than others (e.g. MI). A basic research question is why these FS methods cause different performance, and many existing works seek to answer it with empirical studies. In this paper, we present a formal study of FS in TC. We first define three desirable constraints that any reasonable FS function should satisfy, then check these constraints on some popular FS methods, including IG, DF, MI and two other methods. We find that IG satisfies the first two constraints, that there are strong statistical correlations between DF and the first constraint, and that MI does not satisfy any of the constraints. Experimental results indicate that the empirical performance of an FS function is tightly related to how well it satisfies these constraints, and that none of the investigated FS functions satisfies all three constraints at the same time. Finally, we present a novel framework for developing FS functions that satisfy all three constraints, and design several new FS functions using this framework. Experimental results on the Reuters21578 and Newsgroup corpora show that our new FS function DFICF outperforms IG and DF under both Micro- and Macro-averaged measures.
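The abstract does not spell out the DFICF formula; one plausible reading, combining document frequency with an inverse class frequency in the spirit of tf-idf, is sketched below purely as an illustration:

```python
import numpy as np

def dficf(df_tc, n_classes):
    """Hypothetical DF-ICF score (assumption, not the paper's formula):
    document frequency scaled by inverse class frequency, so high-DF terms
    concentrated in few classes score highest.
    df_tc: (n_terms, n_classes) count of class-c documents containing term t."""
    df = df_tc.sum(axis=1)                       # global document frequency
    cf = (df_tc > 0).sum(axis=1)                 # number of classes containing t
    icf = np.log(n_classes / np.maximum(cf, 1))  # inverse class frequency
    return df * icf
```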

16.
Feature selection is an important preprocessing step for building efficient, generalizable and interpretable classifiers on high-dimensional data sets. Given the assumption of sufficient labelled samples, the Markov blanket provides a complete and sound solution to the selection of optimal features by exploring the conditional independence relationships among the features. In real-world applications, unfortunately, it is usually easy to get unlabelled samples but expensive to obtain the corresponding accurate labels, which leads to the potential waste of valuable classification information buried in unlabelled samples. In this paper, we propose a new BAyesian Semi-SUpervised Method, or BASSUM in short, to exploit the value of unlabelled samples in the classification feature selection problem. Generally speaking, the inclusion of unlabelled samples helps the feature selection algorithm by (1) pinpointing more specific conditional independence tests involving fewer feature variables and (2) improving the robustness of individual conditional independence tests with additional statistical information. Our experimental results show that BASSUM enhances the efficiency of traditional feature selection methods and overcomes the difficulties with redundant features in existing semi-supervised solutions.

17.
Most popular feature selection methods for text classification, such as information gain (also known as "mutual information"), chi-square, and odds ratio, are based on binary information indicating the presence or absence of the feature (or "term") in each training document. As such, these methods do not exploit a rich source of information: how frequently the feature occurs in each training document (term frequency). To overcome this drawback, when doing feature selection we logically break down each training document of length k into k training "micro-documents", each consisting of a single word occurrence and carrying the same class information as the original training document. This move has the double effect of (a) leaving all the original feature selection methods based on binary information straightforwardly applicable, and (b) making them sensitive to term frequency information. We study the impact of this strategy on ordinal text classification, a type of text classification dealing with classes lying on an ordinal scale, recently made popular by applications in customer relationship management, market research, and Web 2.0 mining. We run experiments using four recently introduced feature selection functions, two learning methods of the support vector machines family, and two large datasets of product reviews. The experiments show that the use of this strategy substantially improves the accuracy of ordinal text classification.
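The micro-document transformation itself is mechanical; a sketch (dense term-count input assumed):

```python
import numpy as np
from scipy.sparse import csr_matrix

def to_microdocuments(X, y):
    """Expand a (dense) term-frequency matrix into micro-documents: a
    document with k word occurrences becomes k one-word rows, each with its
    parent's label. Binary feature selection functions applied to the result
    become sensitive to term frequency."""
    rows, cols, labels = [], [], []
    for d, t in zip(*np.nonzero(X)):
        for _ in range(int(X[d, t])):            # one micro-document per occurrence
            rows.append(len(labels))
            cols.append(int(t))
            labels.append(y[d])
    data = np.ones(len(rows), dtype=int)
    shape = (len(labels), X.shape[1])
    return csr_matrix((data, (rows, cols)), shape=shape), np.asarray(labels)
```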

18.
A Feature Selection Algorithm Based on Mutual Information Maximization and Its Applications
Guided by the principle of mutual information maximization, this paper derives and proposes a new information-theoretic feature selection algorithm, called feature selection based on mutual information maximization (MaxMI). The basic idea is that after feature selection, as much information about the class as possible should be retained. The algorithm resembles traditional information gain, mutual information and cross entropy in form, but is not identical to them. Experiments verify that the algorithm outperforms the other three methods.

19.
Traditional feature selection methods that evaluate feature importance globally over the whole corpus may overlook some features that are important for classification. To address this, a two-step feature selection method is proposed. It first filters out features weakly associated with any class; then, using term statistics, it assigns each word to the class it discriminates and finds an optimal subset of classification features for each class; finally, the per-class optimal subsets are combined into the final classification features. Experiments use naive Bayes as the classifier and compare the method with the traditional approach under five feature selection measures (IG, ECE, CC, MI and CHI); the macro-averaged performance comparisons are 91.075% vs. 86.971%, 91.122% vs. 86.992%, 91.160% vs. 87.470%, 90.253% vs. 86.061%, and 90.881% vs. 87.006%, respectively. While taking class-specific feature information into account, the method retains as many of the good features of traditional feature selection as possible and captures classification information better.

20.
The traditional chi-square statistic (CHI) method ignores term frequency information when selecting features globally. This paper proposes an improved text feature selection method: a feature distribution correlation coefficient is introduced to select strongly correlated features that occur locally, and a correction factor resolves CHI's negative-correlation problem, improving the classification metrics on the corpus. In experiments on the NetEase news corpus and the Fudan University Chinese corpus, features are selected with the above method, weighted with an improved term frequency-inverse document frequency (TF-IDF) formula, and classified with support vector machines (SVM) and naive Bayes. The results show that the improved method not only clearly improves classification performance but is also more stable.
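A skeleton of the evaluation setup described above, with the stock chi-square scorer and plain TF-IDF standing in for the paper's improved versions (corpus loading omitted; k is illustrative):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

model = make_pipeline(
    TfidfVectorizer(),           # the paper weights with a modified TF-IDF
    SelectKBest(chi2, k=5000),   # the paper scores with its corrected CHI
    LinearSVC(),                 # the paper also reports naive Bayes results
)
# model.fit(train_texts, train_labels)
# print(model.score(test_texts, test_labels))
```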
