19 similar documents retrieved.
1.
A Study of Feature Selection Methods for Imbalanced Sentiment Classification
With the growth of the Web, sentiment classification has attracted close attention from researchers. Targeting the imbalanced data distribution and high-dimensional features found in sentiment classification, this paper compares four classic feature selection methods as applied to imbalanced sentiment classification. It also proposes three different feature selection schemes and experimentally compares them in terms of classification and dimensionality-reduction performance. The results show that, on imbalanced sentiment classification tasks, feature selection can substantially reduce the dimensionality of the feature vectors without loss of classification performance. Moreover, information gain (IG) combined with the "random undersampling first, then feature selection" scheme achieves the best classification results.
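A minimal sketch of the "random undersampling first, then feature selection" scheme described above, assuming a term-document feature matrix; scikit-learn's mutual_info_classif is used as a stand-in for the IG scorer, and the function name and the value of k are illustrative assumptions, not the paper's code.

```python
# Hypothetical sketch: random undersampling followed by information-gain-style
# feature selection. mutual_info_classif stands in for an IG scorer.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

def undersample_then_select(X, y, k=2000, seed=0):
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    # Randomly undersample every class down to the size of the minority class.
    keep = np.concatenate([
        rng.choice(np.where(y == c)[0], size=n_min, replace=False)
        for c in classes
    ])
    X_bal, y_bal = X[keep], y[keep]
    # Score features on the balanced data and keep the top-k.
    selector = SelectKBest(mutual_info_classif, k=min(k, X.shape[1]))
    selector.fit(X_bal, y_bal)
    return selector  # selector.transform(X) reduces any matrix to the chosen features
```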
2.
Feature selection is a key technique in Uyghur text classification and has a direct impact on classification results. To improve the performance of traditional information gain for Uyghur feature selection, a new information gain feature selection method is proposed, based on an in-depth analysis of the characteristics of the Uyghur language. The method corrects traditional information gain with class-level term frequency, a feature distribution coefficient, and inverse document frequency, and introduces a candidate-feature distribution coefficient to balance the number of features selected per class. The method is validated experimentally on a Uyghur data set, and the results show that the improved algorithm clearly improves Uyghur text classification.
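The abstract does not give the exact correction formula, so the sketch below only illustrates the general idea: a base information-gain score re-weighted by class-level term frequency, a crude feature distribution (concentration) coefficient, and inverse document frequency. The product combination and all names are assumptions.

```python
import numpy as np

def corrected_ig(ig, class_tf, doc_freq, n_docs):
    """Hypothetical correction of an information-gain vector.

    ig        : base IG score per feature, shape (n_features,)
    class_tf  : term frequency per class, shape (n_classes, n_features)
    doc_freq  : document frequency per feature, shape (n_features,)
    n_docs    : total number of documents
    """
    # Inverse document frequency term.
    idf = np.log((n_docs + 1) / (doc_freq + 1)) + 1.0
    # Feature distribution coefficient: how strongly a term's occurrences are
    # concentrated in a single class (higher = more discriminative).
    p = class_tf / np.maximum(class_tf.sum(axis=0, keepdims=True), 1e-12)
    concentration = p.max(axis=0)
    class_freq = class_tf.max(axis=0)     # strongest class-level term frequency
    return ig * np.log1p(class_freq) * concentration * idf
```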
5.
Traditional feature selection methods face serious limitations in imbalanced text sentiment-orientation classification: the feature space is high-dimensional, features are very sparse, and their distribution is imbalanced, all of which sharply reduce classification accuracy. Based on the distribution characteristics of sentiment features in imbalanced text and on the idea of three-way decisions, a three-way-decision feature selection method for imbalanced text sentiment classification (TWD-FS) is proposed. The method combines two supervised feature selection methods and further filters the candidate feature words so that the finally selected features maximize between-class scatter while minimizing within-class scatter, effectively reducing the number of features and the feature dimensionality. In addition, by combining positive- and negative-class sentiment features, it alleviates the imbalance of sentiment features and effectively improves classification of minority-class sentiments in imbalanced samples. Experiments on several data sets, including the imbalanced COAE2013 Chinese microblog data set, show that the proposed feature selection algorithm TWD-FS can effectively improve the accuracy of imbalanced text sentiment classification.
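TWD-FS itself is not specified in the abstract; the following sketch only illustrates the stated selection criterion, a per-feature ratio of between-class to within-class scatter, plus a separate top-k pick for the positive and negative classes so minority-class features are retained. The Fisher-style score, function names, and k_per_class are assumptions.

```python
import numpy as np

def fisher_scores(X, y):
    """Per-feature ratio of between-class to within-class scatter."""
    classes = np.unique(y)
    overall = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)
        between += len(Xc) * (mu_c - overall) ** 2
        within += ((Xc - mu_c) ** 2).sum(axis=0)
    return between / np.maximum(within, 1e-12)

def balanced_select(X, y, pos_label, neg_label, k_per_class=500):
    """Pick top-k features for the positive and negative classes separately,
    then take their union, so minority-class features are not crowded out."""
    scores = fisher_scores(X, y)
    pos_rate = X[y == pos_label].mean(axis=0)
    neg_rate = X[y == neg_label].mean(axis=0)
    pos_feats = np.argsort(-(scores * (pos_rate > neg_rate)))[:k_per_class]
    neg_feats = np.argsort(-(scores * (neg_rate >= pos_rate)))[:k_per_class]
    return np.union1d(pos_feats, neg_feats)
```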
6.
With the rapid development of computer technology and the WWW, text classification has become one of the key techniques in information retrieval, and feature selection plays a crucial role in classification performance. This paper reviews and analyses four commonly used feature selection methods for text classification and proposes a new method based on within-class frequency. Using kNN and support vector machine classifiers, the five feature selection methods are tested on both balanced and imbalanced corpora. The results show that the proposed method effectively selects the features that genuinely matter for classification and yields good results, especially with the support vector machine classifier.
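The within-class frequency criterion is not defined precisely in the abstract; one plausible reading, sketched below purely as an assumption, scores each term by its highest per-class document frequency and keeps the top-ranked terms before training kNN or an SVM.

```python
import numpy as np

def within_class_frequency_scores(X_binary, y):
    """Score each term by its highest within-class document frequency.

    X_binary : dense 0/1 matrix, shape (n_docs, n_terms)
    y        : class labels, shape (n_docs,)
    """
    classes = np.unique(y)
    rates = np.vstack([
        X_binary[y == c].mean(axis=0)   # per-class document frequency of each term
        for c in classes
    ])
    return rates.max(axis=0)            # a term is useful if frequent inside some class

# Usage: keep the top-k terms, then train kNN or an SVM on the reduced matrix, e.g.
# top_k = np.argsort(-within_class_frequency_scores(X_bin, y))[:2000]
```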
7.
In imbalanced-data classification, standard classifiers sacrifice accuracy on the minority class in order to optimize overall classification error, whereas applications usually care most about correctly identifying the minority class. Data-level methods, which are independent of the classifier, generalize well, and are simple to implement, have become an effective strategy for imbalanced classification. This paper surveys data-level methods for imbalanced-data classification, analyses the factors that cause the problem, reviews work on resampling and feature selection from the two directions of sample-space optimization and feature-space optimization, and compares the two families of methods. Finally, it highlights issues that deserve attention and possible research opportunities, providing a reference for research on and application of imbalanced-data classification algorithms.
10.
The high dimensionality of the feature space is one of the main obstacles in text classification, and feature selection is an effective way to reduce it. Existing feature selection functions include document frequency (DF), information gain (IG), and mutual information (MI). Starting from the basic constraints a feature should satisfy and from the design steps of high-performance feature selection methods, an improved feature selection method, SIG, is proposed. While preserving classification performance, it increases the preference for medium- and low-frequency features. Experiments on the Reuters-21578 corpus show that the method achieves good classification results and makes better use of medium- and low-frequency features with strong discriminative power.
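The SIG formula is not given in the abstract. The sketch below is only a generic illustration of biasing a standard relevance score toward medium- and low-frequency terms by damping very frequent ones; it should not be read as the paper's SIG, and mutual_info_classif is a stand-in scorer.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def frequency_biased_scores(X, y, doc_freq, alpha=0.5):
    """Illustrative only: damp a standard relevance score for very frequent
    terms so that medium/low-frequency terms rank higher.

    doc_freq : document frequency per feature, shape (n_features,)
    """
    base = mutual_info_classif(X, y)               # stand-in for an IG score
    penalty = (doc_freq / doc_freq.max()) ** alpha
    return base / (1.0 + penalty)
```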
12.
In supervised classification, we often encounter real-world problems in which the data are not equitably distributed among the classes of the problem; in such cases we are dealing with so-called imbalanced data sets. One of the most widely used techniques to deal with this problem is to preprocess the data prior to the learning process. This paper proposes a method from the family of nested generalized exemplars, which learns by storing objects in Euclidean n-space; new data are classified by computing their distance to the nearest generalized exemplar. The method is optimized by selecting the most suitable generalized exemplars with evolutionary algorithms. An experimental analysis over a wide range of highly imbalanced data sets, using the statistical tests suggested in the specialized literature, shows that the evolutionary proposal outperforms other classic and recent models in accuracy while storing fewer generalized exemplars.
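A toy sketch of the nearest-generalized-exemplar rule the paper builds on: exemplars are axis-aligned hyperrectangles, and a query is assigned the label of the rectangle at the smallest distance (zero if the query falls inside it). The evolutionary selection of exemplars is omitted, and the class and function names are illustrative.

```python
import numpy as np

class GeneralizedExemplar:
    """An axis-aligned hyperrectangle in n-space with a class label."""
    def __init__(self, lower, upper, label):
        self.lower, self.upper, self.label = np.asarray(lower), np.asarray(upper), label

    def distance(self, x):
        # Zero inside the rectangle; otherwise Euclidean distance to the closest face.
        d = np.maximum(self.lower - x, 0) + np.maximum(x - self.upper, 0)
        return np.linalg.norm(d)

def classify(x, exemplars):
    """Predict with the label of the nearest generalized exemplar."""
    return min(exemplars, key=lambda e: e.distance(np.asarray(x))).label

# Usage (toy):
# rects = [GeneralizedExemplar([0, 0], [1, 1], "minority"),
#          GeneralizedExemplar([2, 2], [4, 4], "majority")]
# classify([0.5, 0.3], rects)   # -> "minority"
```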
15.
To address the traffic imbalance caused by the flood of peer-to-peer (P2P) traffic in networks, the idea of imbalanced-data classification is applied to traffic identification. By introducing and improving the synthetic minority over-sampling technique (SMOTE), a mean SMOTE (M-SMOTE) algorithm is proposed to balance the traffic data. Three machine-learning classifiers, random forest (RF), support vector machine (SVM), and back-propagation neural network (BPNN), are then used to identify each class of traffic after rebalancing. Theoretical analysis and simulation results show that, without affecting the accuracy of P2P traffic identification, introducing SMOTE raises the identification accuracy of non-P2P traffic by 16.5 percentage points on average and the overall identification rate of network traffic by 9.5 percentage points compared with the imbalanced case; compared with SMOTE, M-SMOTE further raises the identification accuracy of non-P2P traffic and the overall identification rate by 3.2 and 2.6 percentage points, respectively. The experiments demonstrate that imbalanced-data classification effectively addresses the low identification rate of non-P2P traffic caused by excessive P2P traffic, and that the proposed M-SMOTE offers higher identification accuracy.
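M-SMOTE is not defined in the abstract; one plausible reading, implemented below as an assumption, generates each synthetic minority sample as the mean of a minority sample and its k nearest minority neighbors, instead of SMOTE's random interpolation with a single neighbor.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def m_smote(X_min, n_new, k=5, seed=0):
    """Mean-SMOTE-style oversampling of the minority class (one plausible
    reading of the abstract, not the paper's exact algorithm)."""
    rng = np.random.default_rng(seed)
    k = min(k, len(X_min) - 1)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    _, idx = nn.kneighbors(X_min)            # idx[:, 0] is the sample itself
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        neighborhood = X_min[idx[i]]          # the sample plus its k neighbors
        synthetic.append(neighborhood.mean(axis=0))
    return np.vstack(synthetic)
```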
17.
Genetic algorithms (GAs) have conventionally been used to adaptively evolve classifiers for classification problems. Feature selection plays an important role in finding relevant features for classification. In this paper, feature selection is explored with modular GA-based classification. A new feature selection technique, the relative importance factor (RIF), is proposed to find less relevant features in the input domain of each class module; removing these features aims to reduce both the classification error and the dimensionality of the problem. Benchmark classification data sets are used to evaluate the proposed approach. The experimental results show that RIF can identify less relevant features and helps achieve lower classification error with a reduced feature-space dimension.
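The RIF definition is not given in the abstract, so the snippet below is only an illustrative stand-in: a per-class-module feature importance normalized by the mean importance, with low values flagging candidates for removal from that module's input domain. The random-forest importance and all names are assumptions, not the paper's measure.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def relative_importance(X, y_binary):
    """Illustrative stand-in for a per-class-module relevance measure:
    each feature's importance for a one-vs-rest module, divided by the mean
    importance, so values well below 1 flag less relevant features."""
    module = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y_binary)
    imp = module.feature_importances_
    return imp / max(imp.mean(), 1e-12)
```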
18.
Feature selection for multi-label naive Bayes classification
In multi-label learning, the training set consists of instances each associated with a set of labels, and the task is to predict the label sets of unseen instances. In this paper, this learning problem is addressed with a method called Mlnb, which adapts traditional naive Bayes classifiers to multi-label instances. Feature selection mechanisms are incorporated into Mlnb to improve its performance: first, feature extraction based on principal component analysis removes irrelevant and redundant features; then, feature subset selection based on genetic algorithms chooses the most appropriate subset of features for prediction. Experiments on synthetic and real-world data show that Mlnb achieves performance comparable to other well-established multi-label learning algorithms.
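A compact, hypothetical sketch of the two-stage feature selection pipeline described above: PCA-based extraction followed by a small genetic algorithm that evolves a binary feature mask. Mlnb itself is not reproduced; the one-vs-rest Gaussian naive Bayes scorer, the subset-accuracy fitness, and the GA parameters are all assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import cross_val_score

def ga_feature_subset(X, Y, n_gen=20, pop=20, seed=0):
    """Evolve a binary feature mask for multi-label data Y (binary indicator matrix)."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    population = rng.integers(0, 2, size=(pop, n))

    def fitness(mask):
        if mask.sum() == 0:
            return 0.0
        model = OneVsRestClassifier(GaussianNB())
        # Subset accuracy as a crude multi-label fitness (an assumption).
        return cross_val_score(model, X[:, mask.astype(bool)], Y, cv=3).mean()

    for _ in range(n_gen):
        scores = np.array([fitness(m) for m in population])
        parents = population[np.argsort(-scores)[: pop // 2]]
        children = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)                  # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child[rng.random(n) < 0.01] ^= 1          # bit-flip mutation
            children.append(child)
        population = np.vstack([parents, children])
    return population[np.argmax([fitness(m) for m in population])].astype(bool)

# Usage sketch:
# X_pca = PCA(n_components=0.95).fit_transform(X)    # stage 1: PCA extraction
# mask = ga_feature_subset(X_pca, Y)                 # stage 2: GA subset selection
```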
19.
In the class-imbalanced learning scenario, traditional machine learning algorithms that optimize overall accuracy tend to perform poorly on the minority class, which is usually the class of most interest. Many effective approaches have been proposed to solve this problem. Among them, bagging ensembles integrated with under-sampling techniques have demonstrated better performance than, for example, bagging ensembles integrated with over-sampling or cost-sensitive methods. Although these under-sampling techniques promote diversity among the generated base classifiers through random partitioning or sampling of the majority class, they take no measure to ensure the classification performance of the individual classifiers, which limits the ensemble performance that can be achieved. On the other hand, evolutionary under-sampling (EUS) is a novel under-sampling technique that has been successfully applied to searching for the best majority-class subset for training a well-performing nearest-neighbor classifier. Inspired by EUS, this paper introduces it into the under-sampling bagging framework and proposes an EUS-based bagging ensemble method, EUS-Bag, with a new fitness function that considers three factors to make EUS better suited to the framework. With this fitness function, EUS-Bag can generate a set of accurate and diverse base classifiers. To verify its effectiveness, a series of comparison experiments is conducted on 22 two-class imbalanced classification problems. Experimental results measured with recall, geometric mean, and AUC all demonstrate its superior performance.
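A minimal sketch of the under-sampling bagging framework that EUS-Bag builds on: each base learner is trained on all minority samples plus an equally sized random majority subset, and predictions are combined by majority vote. The evolutionary search over majority subsets and the three-factor fitness function are not reproduced; the decision-tree base learner and function names are assumptions, and integer class labels are assumed in predict.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def under_bagging(X, y, minority_label, n_estimators=10, seed=0):
    """Train each base classifier on all minority samples plus a random,
    equally sized subset of the majority class."""
    rng = np.random.default_rng(seed)
    min_idx = np.where(y == minority_label)[0]
    maj_idx = np.where(y != minority_label)[0]
    models = []
    for _ in range(n_estimators):
        sampled_maj = rng.choice(maj_idx, size=len(min_idx), replace=False)
        idx = np.concatenate([min_idx, sampled_maj])
        models.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    return models

def predict(models, X):
    # Majority vote over the base classifiers (assumes integer class labels).
    votes = np.stack([m.predict(X) for m in models])
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)
```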