Similar Literature
20 similar articles retrieved.
1.
This work aims to connect two rarely combined research directions, i.e., non-stationary data stream classification and data analysis with skewed class distributions. We propose a novel framework employing stratified bagging for training base classifiers to integrate data preprocessing and dynamic ensemble selection methods for imbalanced data stream classification. The proposed approach has been evaluated in computer experiments carried out on 135 artificially generated data streams with various imbalance ratios, label noise levels, and types of concept drift, as well as on two selected real streams. Four preprocessing techniques and two dynamic selection methods, applied at both the bagging classifier and base estimator levels, were considered. Experimental results showed that, for highly imbalanced data streams, dynamic ensemble selection coupled with data preprocessing could outperform online and chunk-based state-of-the-art methods.
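A minimal sketch of the stratified-bagging step described above, assuming scikit-learn-style estimators: each bootstrap replicate is drawn separately within each class so minority examples appear in every base classifier's training set. The function names, the decision-tree base learner, and the per-chunk training policy are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier

def stratified_bootstrap(X, y, rng):
    """Draw a bootstrap sample separately within each class so that
    minority examples are guaranteed to appear in every replicate."""
    idx = []
    for c in np.unique(y):
        members = np.flatnonzero(y == c)
        idx.append(rng.choice(members, size=len(members), replace=True))
    idx = np.concatenate(idx)
    return X[idx], y[idx]

def train_stratified_bagging(X_chunk, y_chunk, n_estimators=10, seed=0):
    """Train one bag of base classifiers on a single data-stream chunk."""
    rng = np.random.default_rng(seed)
    ensemble = []
    for _ in range(n_estimators):
        Xb, yb = stratified_bootstrap(X_chunk, y_chunk, rng)
        ensemble.append(clone(DecisionTreeClassifier()).fit(Xb, yb))
    return ensemble
```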

2.
Dynamic imbalanced data classification is an important research problem in online learning and class-imbalance learning, dealing with data streams whose class distributions are highly skewed. Such problems are common in practical scenarios, e.g., fault diagnosis in real-time control and monitoring systems and intrusion detection in computer networks. Because dynamic data streams exhibit both concept drift and class imbalance, a stream classification algorithm must handle concept drift and resolve class imbalance at the same time. To address these issues, a method is proposed that handles imbalanced data while detecting concept drift. The method uses the Kappa coefficient to detect concept drift, then checks the class balance ratio and updates the classifier with an imbalanced-data classification method. Experimental results show that, across different evaluation metrics, the algorithm achieves good classification performance on imbalanced data streams.
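A rough sketch of Kappa-based drift monitoring in the spirit of the method above: Cohen's kappa is tracked over a sliding window of recent predictions and a drop below a threshold flags suspected drift. The window size, threshold, and class structure are assumptions, not the paper's exact settings.

```python
from collections import deque
from sklearn.metrics import cohen_kappa_score

class KappaDriftMonitor:
    """Track Cohen's kappa over a sliding window of recent predictions and
    flag a possible concept drift when it falls below a threshold."""

    def __init__(self, window_size=200, kappa_threshold=0.3):
        self.window_size = window_size
        self.kappa_threshold = kappa_threshold   # assumed value, tune per stream
        self.y_true = deque(maxlen=window_size)
        self.y_pred = deque(maxlen=window_size)

    def update(self, true_label, predicted_label):
        """Add one labelled prediction; return True if drift is suspected."""
        self.y_true.append(true_label)
        self.y_pred.append(predicted_label)
        if len(self.y_true) < self.window_size:
            return False  # not enough evidence yet
        kappa = cohen_kappa_score(list(self.y_true), list(self.y_pred))
        return kappa < self.kappa_threshold
```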

3.
Concept drift is a difficult problem in data stream learning, and class imbalance in data streams can also severely degrade classification performance. To address the joint problem of concept drift and class imbalance, this work introduces an online update mechanism into a chunk-based ensemble method and combines it with resampling and a forgetting mechanism, yielding an incremental weighted ensemble for imbalance learning (IWEIL). Built on an ensemble framework, the method uses a forgetting mechanism based on a variable-size window to evaluate each base classifier on the most recent instances in the window and to compute its weight; as new instances arrive one by one, each base classifier in IWEIL and its weight are updated online. In addition, an improved adaptive nearest-neighbor SMOTE method generates new minority-class instances that conform to the new concept, addressing class imbalance in the stream. Experiments on synthetic and real data sets show that, compared with the DWMIL algorithm, IWEIL improves G-mean and recall by 5.77% and 6.28% on the HyperPlane data set, and by 3.25% and 6.47% on the Electricity data set. Finally, IWEIL also performs well on an Android application detection task.

4.
Concept drift and class imbalance in complex data streams degrade classifier performance. Traditional batch learning algorithms must account for memory and running time, and they do not perform well on massive, fast-arriving data streams, which additionally contain substantial drift and class imbalance; using online ensemble algorithms to handle complex data streams has therefore become an important research topic in data mining. This paper reviews and summarizes, from the perspective of ensemble strategy, the online versions of bagging, boosting, and stacking, and compares the performance of different models. It gives the first detailed summary and analysis of online ensemble classification algorithms for complex data streams: drift detection and classification algorithms are introduced from the perspectives of active detection and passive adaptation, imbalanced data streams are covered from the perspectives of data preprocessing and cost sensitivity, the time and space efficiency of representative algorithms is analyzed, and algorithms evaluated on the same data sets are compared. Finally, future research directions are proposed for the challenges in online ensemble classification of complex data streams.

5.
The problem addressed in this study concerns mining data streams with concept drift. The goal of the article is to propose and validate a new approach to mining data streams with concept drift using an ensemble classifier constructed from one-class base classifiers. It is assumed that the base classifiers of the proposed ensemble are induced from incoming chunks of the data stream. Each chunk consists of prototypes and information about whether the class predictions for these instances, carried out at earlier steps, have been correct. Each data chunk can be updated using an instance selection technique when new data arrive. When a new data chunk is formed, the ensemble model is also updated on the basis of weights assigned to each one-class classifier. In this article, two well-known instance-based learning algorithms, the CNN and the ENN, have been adopted to solve the one-class classification problems and, consequently, to update the proposed classifier ensemble. The proposed approaches have been validated experimentally, and the computational experiment results are shown and discussed. The results prove that the proposed approach, using an ensemble classifier constructed from one-class base classifiers with instance selection for chunk updating, can outperform well-known approaches for data streams with concept drift.

6.
Identifying the temporal variations in mental workload level (MWL) is crucial for enhancing the safety of human–machine system operations, especially when there is cognitive overload or inattention of the human operator. This paper proposes a cost-sensitive majority weighted minority oversampling strategy to address the imbalanced MWL data classification problem. Both the inter-class and intra-class imbalance problems are considered. For the former, an imbalance ratio is defined to determine the number of synthetic samples in the minority class. The latter problem is addressed by assigning different weights to borderline samples in the minority class based on distance and density measures of the sample distribution. Furthermore, a multi-label classifier is designed based on an ensemble of binary classifiers. The results of analyzing 21 imbalanced UCI multi-class datasets showed that the proposed approach can effectively cope with the imbalanced classification problem in terms of several performance metrics, including geometric mean (G-mean) and average accuracy (ACC). Moreover, the proposed approach was applied to the analysis of EEG data from eight experimental participants subject to fluctuating levels of mental workload. The comparative results showed that the proposed method provides a competing alternative to several existing imbalanced learning algorithms and significantly outperforms the basic/referential method that ignores the imbalanced nature of the dataset.
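A rough sketch of the weighted minority oversampling step as described above: the imbalance ratio fixes how many synthetic samples to create, and a per-sample weight controls how often each minority instance is chosen as a seed. The interpolation rule and the `weights` vector are stand-ins; the paper's exact distance/density-based borderline weighting is not reproduced.

```python
import numpy as np

def weighted_minority_oversampling(X_min, X_maj, weights, rng=None):
    """Generate synthetic minority samples by interpolating between a
    weighted-random seed and a random minority partner.  `weights` stands in
    for the paper's distance/density borderline weights; it only has to be a
    non-negative vector over the rows of X_min."""
    rng = np.random.default_rng() if rng is None else rng
    n_synthetic = len(X_maj) - len(X_min)   # imbalance ratio decides the count
    p = np.asarray(weights, dtype=float)
    p = p / p.sum()
    synthetic = []
    for _ in range(n_synthetic):
        i = rng.choice(len(X_min), p=p)      # weighted seed selection
        j = rng.integers(len(X_min))         # random minority partner
        lam = rng.random()
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    if not synthetic:
        return np.empty((0, X_min.shape[1]))
    return np.vstack(synthetic)
```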

7.
Class imbalance limits the performance of most learning algorithms since they cannot cope with large differences between the number of samples in each class, resulting in low predictive accuracy over the minority class. In this respect, several papers have proposed algorithms aiming at achieving more balanced performance. However, balancing the recognition accuracies for each class very often harms the global accuracy: in these cases the accuracy over the minority class increases while the accuracy over the majority one decreases. This paper proposes an approach to overcome this limitation: for each classification act, it chooses between the output of a classifier trained on the original skewed distribution and the output of a classifier trained according to a learning method addressing the problem of imbalanced data. This choice is driven by a parameter whose value maximizes, on a validation set, two objective functions, i.e., the global accuracy and the accuracies for each class. A series of experiments on ten public datasets with different proportions between the majority and minority classes shows that the proposed approach provides more balanced recognition accuracies than classifiers trained according to traditional learning methods for imbalanced data, as well as larger global accuracy than classifiers trained on the original skewed distribution.
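A minimal sketch of how such a switching parameter could be tuned on a validation set. Here the assumed rule is: trust the skewed-data classifier when its top posterior exceeds a threshold t, otherwise fall back to the imbalance-aware classifier, with t chosen to maximize the mean of global and balanced accuracy. Both the rule and the combined objective are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from sklearn.metrics import accuracy_score, balanced_accuracy_score

def tune_switch_threshold(clf_skewed, clf_balanced, X_val, y_val, grid=None):
    """Pick the switching parameter t on a validation set (sketch)."""
    grid = np.linspace(0.5, 1.0, 26) if grid is None else grid
    proba = clf_skewed.predict_proba(X_val)      # assumes probabilistic outputs
    pred_skewed = clf_skewed.predict(X_val)
    pred_balanced = clf_balanced.predict(X_val)
    best_t, best_score = grid[0], -np.inf
    for t in grid:
        # use the skewed-data classifier only when it is confident enough
        pred = np.where(proba.max(axis=1) >= t, pred_skewed, pred_balanced)
        score = 0.5 * (accuracy_score(y_val, pred)
                       + balanced_accuracy_score(y_val, pred))
        if score > best_score:
            best_t, best_score = t, score
    return best_t
```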

8.
In this paper, we present a new dynamic classifier design based on a set of independent one-class SVMs for image data stream categorization. Dynamic or continuous learning and classification has recently been investigated to deal with different situations, such as online learning of fixed concepts, learning in non-stationary environments (concept drift), or learning from imbalanced data. Most solutions are not able to deal with many of these specificities at the same time. In particular, adding new concepts and merging or splitting concepts are usually considered less important and are consequently less studied, whereas they are of high interest for stream-based document image classification. To deal with that kind of data, we explore a learning and classification scheme based on one-class SVM classifiers that we call mOC-iSVM (multi-one-class incremental SVM). Even if one-class classifiers suffer from a lack of discriminative power, they have, as a counterpart, many interesting properties coming from their independent modeling. The experiments presented in the paper show the theoretical feasibility on different benchmarks considering the addition of new classes. Experiments also demonstrate that the mOC-iSVM model can be efficiently used for tasks dedicated to document classification (by image quality and image content) in a streaming context, handling many typical scenarios for concept extension, drift, split, and merge.

9.
Gears are key components of transmission machinery and one of the main sources of faults during operation, so research on gear fault diagnosis is of great importance. In gear fault diagnosis data sets, however, fault samples are usually far fewer than non-fault samples, which raises the problem of fault diagnosis under imbalanced data. Previous studies have paid little attention to the impact of this imbalance on fault diagnosis. In addition, fault data sets contain redundant and even irrelevant features, which weaken the generalization ability of the learner. To address these problems, a Relief-based EasyEnsemble algorithm is proposed to deal with data imbalance in fault diagnosis. Experimental results on UCI data sets and a gear data set show that the new algorithm improves the classification performance and predictive ability of the classifier on imbalanced data sets.
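A minimal sketch of the EasyEnsemble core used by the method above: the majority class is repeatedly undersampled to the size of the minority class and one boosted learner is trained per subset. The Relief-style feature ranking mentioned in the abstract would be applied to X beforehand and is only indicated in a comment; the base learner and vote rule are assumptions.

```python
import numpy as np
from sklearn.base import clone
from sklearn.ensemble import AdaBoostClassifier

def easy_ensemble(X, y, minority_label, n_subsets=10, seed=0):
    """Repeatedly undersample the majority class and train one boosted
    learner per balanced subset.  A Relief-style feature selection, as in
    the abstract, would be applied to X before this step (omitted here)."""
    rng = np.random.default_rng(seed)
    minority = np.flatnonzero(y == minority_label)
    majority = np.flatnonzero(y != minority_label)
    learners = []
    for _ in range(n_subsets):
        sampled = rng.choice(majority, size=len(minority), replace=False)
        idx = np.concatenate([minority, sampled])
        learners.append(clone(AdaBoostClassifier()).fit(X[idx], y[idx]))
    return learners

def predict_majority_vote(learners, X):
    """Majority vote over the subset learners (labels assumed to be 0/1)."""
    votes = np.stack([m.predict(X) for m in learners])
    return (votes.mean(axis=0) >= 0.5).astype(int)
```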

10.
The present paper investigates the influence of both the imbalance ratio and the classifier on the performance of several resampling strategies for dealing with imbalanced data sets. The study focuses on evaluating how learning is affected when different resampling algorithms transform the originally imbalanced data into artificially balanced class distributions. Experiments over 17 real data sets using eight different classifiers, four resampling algorithms, and four performance evaluation measures show that over-sampling the minority class consistently outperforms under-sampling the majority class when data sets are strongly imbalanced, whereas there are no significant differences for data sets with a low imbalance. Results also indicate that the choice of classifier has very little influence on the effectiveness of the resampling strategies.

11.
Li Qian, Yang Bing, Li Yi, Deng Naiyang, Jing Ling. Neural Computing & Applications, 2012, 22(1): 249-256

A novel method, namely ensemble support vector machine with segmentation (SeEn–SVM), for the classification of imbalanced datasets is proposed in this paper. In particular, a vector quantization algorithm is used to segment the majority class, generating several small datasets that are less imbalanced than the original one, and two different weighting functions are proposed to integrate the results of the basic classifiers. The goal of the SeEn–SVM algorithm is to improve the prediction accuracy of the minority class, which is usually of greater practical interest. SeEn–SVM is applied to six UCI datasets, and the results confirm its better performance than previously proposed methods for the imbalance problem.

12.
For multi-class imbalance problems, a new method based on the one-versus-one (OVO) decomposition strategy is proposed. The multi-class imbalance problem is first decomposed into multiple binary classification problems with the OVO strategy; binary classifiers are then built with algorithms designed for imbalanced binary classification; the original data set is processed with the SMOTE over-sampling technique; redundant classifiers are handled with a distance-based relative competence weighting method; and the final output is obtained by weighted voting. Extensive experiments on KEEL imbalanced data sets show that the proposed algorithm has significant advantages over other classical methods.
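A minimal sketch of the OVO decomposition with weighted voting described above. The `rebalance` hook is where a per-pair resampling step (e.g., SMOTE) could be plugged in; uniform vote weights stand in for the paper's distance-based relative-competence weights, and the SVM base learner is an assumption.

```python
import numpy as np
from itertools import combinations
from sklearn.base import clone
from sklearn.svm import SVC

def train_ovo(X, y, base=None, rebalance=None):
    """One-versus-one decomposition: one binary learner per class pair.
    `rebalance` is a hook for a per-pair resampling step (default: no-op)."""
    base = SVC() if base is None else base
    rebalance = rebalance or (lambda Xp, yp: (Xp, yp))
    models = {}
    for a, b in combinations(np.unique(y), 2):
        mask = np.isin(y, [a, b])
        Xp, yp = rebalance(X[mask], y[mask])
        models[(a, b)] = clone(base).fit(Xp, yp)
    return models

def predict_ovo(models, X, weights=None):
    """Weighted pairwise voting; uniform weights are a placeholder for the
    paper's distance-based relative-competence weights."""
    classes = sorted({c for pair in models for c in pair})
    col = {c: i for i, c in enumerate(classes)}
    votes = np.zeros((len(X), len(classes)))
    for pair, model in models.items():
        w = 1.0 if weights is None else weights[pair]
        for i, p in enumerate(model.predict(X)):
            votes[i, col[p]] += w
    return np.array(classes)[votes.argmax(axis=1)]
```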

13.
Rolling bearing defects are one of the main causes of faults in rolling bearings during operation, so research on bearing defect diagnosis techniques is of great importance. In bearing fault diagnosis data sets, however, fault samples are usually far fewer than non-fault samples, which raises the problem of fault diagnosis under imbalanced data. Previous studies have paid little attention to the impact of this imbalance on fault diagnosis. In addition, fault data sets contain redundant and even irrelevant features, which weaken the generalization ability of the learner. To address these problems, this paper proposes a Fisher-criterion-based EasyEnsemble algorithm to deal with data imbalance in fault diagnosis. Experimental results on UCI data sets and a rolling bearing data set show that the new algorithm improves the classification performance and predictive ability of the classifier on imbalanced data sets.

14.
Class imbalance in text classification data sets is a common problem in practical applications. Starting from feature selection optimization and classifier performance improvement, a combined text classification method for imbalanced data sets is proposed. For feature selection, the traditional CHI-statistic feature selection method is improved by jointly considering the positive and negative correlation between feature terms and categories as well as the category-discriminating strength. At the data level, a resampling method is applied to the imbalanced training corpus to reduce the impact of the imbalance on classification performance. Experimental results show that the method achieves good text classification results on imbalanced data sets.

15.
In this paper, we propose two risk-sensitive loss functions to solve multi-category classification problems where the number of training samples is small and/or there is a high imbalance in the number of samples per class. Such problems are common in the bioinformatics/medical diagnosis areas. The most commonly used loss functions in the literature do not perform well in these problems, as they minimize only the approximation error and neglect the estimation error due to imbalance in the training set. The proposed risk-sensitive loss functions minimize both the approximation and estimation errors. We present an error analysis for the risk-sensitive loss functions along with other well-known loss functions. Using a neural architecture, classifiers incorporating these risk-sensitive loss functions have been developed and their performance evaluated for two real-world multi-class classification problems, viz., a satellite image classification problem and a micro-array gene expression based cancer classification problem. To study the effectiveness of the proposed loss functions, we deliberately imbalanced the training samples in the satellite image problem and compared the performance of our neural classifiers with those developed using other well-known loss functions. The results indicate the superior performance of the neural classifier using the proposed loss functions in terms of both overall and per-class classification accuracy. Performance comparisons have also been carried out on a number of benchmark problems where the data are normal, i.e., not sparse or imbalanced. Results indicate similar or better performance of the proposed loss functions compared with the well-known loss functions.

16.
Adapted One-versus-All Decision Trees for Data Stream Classification
One-versus-all (OVA) decision trees learn k individual binary classifiers, each one to distinguish the instances of a single class from the instances of all other classes. Thus OVA differs from existing data stream classification schemes, most of which use multiclass classifiers, each discriminating among all the classes. This paper advocates some outstanding advantages of OVA for data stream classification. First, there is low error correlation and hence high diversity among OVA's component classifiers, which leads to high classification accuracy. Second, OVA is adept at accommodating new class labels that often appear in data streams. However, many challenges remain in deploying traditional OVA for classifying data streams. First, as every instance is fed to all component classifiers, OVA is known as an inefficient model. Second, OVA's classification accuracy is adversely affected by the imbalanced class distribution in data streams. This paper addresses those key challenges and consequently proposes a new OVA scheme that is adapted for data stream classification. Theoretical analysis and empirical evidence reveal that the adapted OVA can offer faster training, faster updating, and higher classification accuracy than many existing popular data stream classification algorithms.
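A minimal sketch of the plain OVA idea the abstract builds on: one binary tree per class, so a class that first appears later in the stream only requires fitting one new component. The chunk-wise retraining policy and the decision-tree components are simplifying assumptions, not the adapted OVA scheme itself; it also assumes each chunk contains both positive and negative examples for every class it covers.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class OneVersusAllTrees:
    """One binary 'class vs. rest' tree per class label (sketch)."""

    def __init__(self):
        self.components = {}   # class label -> binary DecisionTreeClassifier

    def partial_fit_chunk(self, X_chunk, y_chunk):
        """(Re)train the binary tree for every class present in the chunk;
        unseen classes are simply added as new components."""
        for c in np.unique(y_chunk):
            self.components[c] = DecisionTreeClassifier().fit(
                X_chunk, (y_chunk == c).astype(int))

    def predict(self, X):
        labels = list(self.components)
        # score = estimated probability of the positive ("is this class") label
        scores = np.column_stack([
            self.components[c].predict_proba(X)[:, -1] for c in labels])
        return np.array(labels)[scores.argmax(axis=1)]
```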

17.
In many applications of information systems, learning algorithms have to act in dynamic environments where data are collected in the form of transient data streams. Compared to static data mining, processing streams imposes new computational requirements: algorithms must incrementally process incoming examples while using limited memory and time. Furthermore, due to the non-stationary characteristics of streaming data, prediction models are often also required to adapt to concept drifts. Among the many newly proposed stream algorithms, ensembles play an important role, in particular for non-stationary environments. This paper surveys research on ensembles for data stream classification as well as regression tasks. Besides presenting a comprehensive spectrum of ensemble approaches for data streams, we also discuss advanced learning concepts such as imbalanced data streams, novelty detection, active and semi-supervised learning, complex data representations, and structured outputs. The paper concludes with a discussion of open research problems and lines of future research.

18.
In an imbalanced dataset, the positive and negative classes can be quite different in both size and distribution. This degrades the performance of many feature extraction methods and classifiers. This paper proposes a method for extracting minimum positive and maximum negative features (in terms of absolute value) for imbalanced binary classification. The paper develops two models to yield the feature extractors. Model 1 first generates a set of candidate extractors that can minimize the positive features to zero, and then chooses the ones among these candidates that can maximize the negative features. Model 2 first generates a set of candidate extractors that can maximize the negative features, and then chooses the ones that can minimize the positive features. Compared with traditional feature extraction methods and classifiers, the proposed models are less likely to be affected by the imbalance of the dataset. Experimental results show that these models can perform well when the positive class and negative class are imbalanced in both size and distribution.

19.
Existing classification algorithms are typically biased when mining imbalanced data: classification and prediction performance on positive samples (usually the more important class) is worse than on negative samples. To address this, a classification method for imbalanced data is proposed. The method introduces different penalty parameters for the different classes to flexibly control the upper bounds of the two misclassification rates, and separates the two classes with a hypersphere at the maximum separation ratio, thereby improving classification and prediction performance on the positive class for imbalanced data. Experimental results show that the method effectively improves classification performance on imbalanced data.
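The per-class penalty idea can be illustrated with scikit-learn's class_weight, which scales the SVM penalty C separately for each class so errors on the minority (positive) class are punished more heavily. This only illustrates the "different penalty per class" ingredient; the hypersphere (SVDD-style) separation used by the method above is not reproduced, and the weight values are assumptions.

```python
from sklearn.svm import SVC

# Heavier penalty on the minority class (label 1 here, an assumed encoding):
# errors on positives cost 10x as much as errors on negatives.
clf = SVC(C=1.0, class_weight={0: 1.0, 1: 10.0}, kernel="rbf", gamma="scale")
# clf.fit(X_train, y_train)  # X_train, y_train: your imbalanced training data
```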

20.
Xu Shuliang, Wang Junhong. Computer Science, 2016, 43(12): 173-178
Data stream mining has become a popular research direction in data mining; because of concept drift in data streams, traditional classification algorithms cannot be applied to them directly. To cope effectively with concept drift in data streams, a data stream classification algorithm based on the Kappa coefficient is proposed. The algorithm adopts ensemble classification, measures the classification performance of the system with the Kappa coefficient, and dynamically adjusts the classifiers accordingly; when concept drift occurs, the system can use existing knowledge to quickly remove classifiers that no longer meet the requirement and thus adapt to the new concept. Experimental results show that, compared with the BWE, AE, and AWE algorithms included in the experiments, the algorithm not only achieves good classification performance but also reduces time overhead to a certain extent.
