Similar Documents
20 similar documents found.
1.
This work aims to connect two rarely combined research directions, i.e., non-stationary data stream classification and data analysis with skewed class distributions. We propose a novel framework employing stratified bagging for training base classifiers to integrate data preprocessing and dynamic ensemble selection methods for imbalanced data stream classification. The proposed approach has been evaluated in computer experiments carried out on 135 artificially generated data streams with various imbalance ratios, label noise levels, and types of concept drift, as well as on two selected real streams. Four preprocessing techniques and two dynamic selection methods, used at both the bagging classifier and base estimator levels, were considered. The experimental results showed that, for highly imbalanced data streams, dynamic ensemble selection coupled with data preprocessing can outperform online and chunk-based state-of-the-art methods.
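To make the combination of preprocessing and dynamic ensemble selection concrete, the following is a minimal sketch, not the authors' framework: SMOTE preprocessing (from imbalanced-learn) plus a simplified KNORA-style dynamic selection on a static imbalanced set; the pool size, neighbourhood size, and competence threshold are illustrative assumptions.

```python
# Minimal sketch (not the authors' framework): SMOTE preprocessing plus a
# simplified KNORA-style dynamic ensemble selection on a static imbalanced set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import NearestNeighbors
from imblearn.over_sampling import SMOTE  # assumes imbalanced-learn is installed

X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
X_train, X_dsel, y_train, y_dsel = train_test_split(X_train, y_train, stratify=y_train, random_state=0)

# Preprocess the training data, then train a pool of bagged trees.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)
pool = BaggingClassifier(DecisionTreeClassifier(), n_estimators=10, random_state=0).fit(X_res, y_res)

# Dynamic selection: for each test point, keep only the pool members that are
# correct on most of its k nearest neighbours in the selection set, then vote.
nn = NearestNeighbors(n_neighbors=7).fit(X_dsel)
preds_dsel = np.array([est.predict(X_dsel) for est in pool.estimators_])
preds_test = np.array([est.predict(X_test) for est in pool.estimators_])

y_pred = []
for i, idx in enumerate(nn.kneighbors(X_test, return_distance=False)):
    competent = (preds_dsel[:, idx] == y_dsel[idx]).mean(axis=1) >= 0.5
    votes = preds_test[competent if competent.any() else slice(None), i]
    y_pred.append(np.bincount(votes).argmax())

print("accuracy with preprocessing + dynamic selection:", (np.array(y_pred) == y_test).mean())
```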

2.
Learning from imbalanced data occurs frequently in many machine learning applications. One positive example to thousands of negative instances is common in scientific applications. Unfortunately, traditional machine learning techniques often treat rare instances as noise. One popular approach to this difficulty is to resample the training data. However, this results in high false-positive predictions. Hence, we propose preprocessing the training data by partitioning them into clusters. This greatly reduces the imbalance between minority and majority instances in each cluster. For moderate imbalance ratios, our technique gives better prediction accuracy than other resampling methods. For extreme imbalance ratios, it serves as a good filter that reduces the amount of imbalance so that traditional classification techniques can be deployed. More importantly, we have successfully applied our techniques to splice site prediction and the protein subcellular localization problem, with significant improvements over previous predictors.
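As a rough illustration of the partitioning idea (not the paper's exact clustering scheme), the sketch below clusters an imbalanced training set with k-means and reports how the majority-to-minority ratio shrinks inside the clusters that contain minority points; the cluster count is an assumption.

```python
# Sketch: partition the training data with k-means and inspect how much the
# majority/minority ratio shrinks inside each cluster compared with the full set.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=5000, weights=[0.98, 0.02],
                           n_clusters_per_class=3, random_state=1)
print("global ratio:", (y == 0).sum() / (y == 1).sum())

labels = KMeans(n_clusters=8, n_init=10, random_state=1).fit_predict(X)
for c in range(8):
    maj = ((labels == c) & (y == 0)).sum()
    mnr = ((labels == c) & (y == 1)).sum()
    if mnr:                                   # clusters holding minority points
        print(f"cluster {c}: {maj} majority vs {mnr} minority -> ratio {maj / mnr:.1f}")
```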

3.
A novel negative-immunity-based oversampling algorithm for imbalanced data   Cited by 2 (self-citations: 0, others: 2)
陶新民  徐晶 《控制与决策》2010,25(6):867-872
To improve classification performance on imbalanced data sets, an oversampling algorithm based on negative immunity (negative selection) is proposed. The algorithm uses negative selection to cover the minority-class sample space and takes the centers of the generated detectors as artificially generated minority-class samples. Because the algorithm uses information from the majority-class samples to generate minority-class samples, it avoids the lack of spatial representativeness that affects the artificial samples produced by the Synthetic Minority Over-sampling Technique (SMOTE). Experiments comparing this algorithm with SMOTE and its improved variants show that it not only effectively improves classification performance on the minority class but also significantly improves overall classification performance.
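The following is a rough sketch of the negative-selection idea as described in the abstract: random candidate detectors are accepted only if they lie far enough from every majority-class sample, and the accepted detector centres become synthetic minority samples. The sampling region, the "self" radius, and the target count are illustrative assumptions, not values from the paper.

```python
# Rough sketch of negative-selection-style oversampling: candidate detectors are
# accepted only if they lie far from every majority-class sample; accepted
# detector centres become synthetic minority samples.  Thresholds are assumptions.
import numpy as np

rng = np.random.default_rng(0)
X_maj = rng.normal(0.0, 1.0, size=(500, 2))          # majority class
X_min = rng.normal(3.0, 0.5, size=(25, 2))           # minority class

lo, hi = X_min.min(axis=0) - 1.0, X_min.max(axis=0) + 1.0   # minority region
self_radius = 1.0                                     # "self" radius around majority points
needed = len(X_maj) - len(X_min)

synthetic = []
while len(synthetic) < needed:
    cand = rng.uniform(lo, hi, size=2)                # candidate detector centre
    d = np.linalg.norm(X_maj - cand, axis=1).min()
    if d > self_radius:                               # does not match any majority sample
        synthetic.append(cand)

X_min_aug = np.vstack([X_min, np.array(synthetic)])
print(X_min_aug.shape)                                # balanced minority class
```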

4.
Rule-based systems may sometimes grow very large, making their acceptance by users and their maintenance quite problematic. One therefore needs to make rule-bases as compact as possible. The classical definition of rule redundancy in the literature is based upon logic and graph theory. Another, complementary, view of redundancy is proposed here. The suggested approach is based on the contribution of individual rules to the overall system’s accuracy.

It is shown here, through an analysis of a real-world credit scoring rule-based system, that by taking the system’s accuracy into account, one can sometimes significantly reduce the size of a rule-base, even one which is already free from logic-related abnormalities. The approach taken here is not proposed as a substitute for classical logic and graph-based methods. Rather, it complements them.
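A minimal sketch of accuracy-driven rule-base compaction follows: greedily drop any rule whose removal does not reduce validation accuracy. The rule representation (a condition plus a predicted class), the default class, and the toy data are assumptions for illustration, not the paper's system.

```python
# Sketch of accuracy-driven rule-base compaction: greedily drop any rule whose
# removal does not hurt validation accuracy.  The rule format (predicate, label)
# and the default class are assumptions for illustration.
from typing import Callable, List, Tuple

Rule = Tuple[Callable[[dict], bool], int]   # (condition on a record, predicted class)

def classify(rules: List[Rule], record: dict, default: int = 0) -> int:
    for cond, label in rules:
        if cond(record):
            return label
    return default

def accuracy(rules: List[Rule], data: List[Tuple[dict, int]]) -> float:
    return sum(classify(rules, r) == y for r, y in data) / len(data)

def prune(rules: List[Rule], data: List[Tuple[dict, int]]) -> List[Rule]:
    kept = list(rules)
    base = accuracy(kept, data)
    for rule in list(kept):
        trial = [r for r in kept if r is not rule]
        if accuracy(trial, data) >= base:     # rule contributes nothing: drop it
            kept, base = trial, accuracy(trial, data)
    return kept

# Toy credit-scoring example with a redundant rule.
rules = [(lambda r: r["income"] > 50, 1),
         (lambda r: r["income"] > 80, 1),    # subsumed by the rule above
         (lambda r: r["debt"] > 10, 0)]
data = [({"income": 90, "debt": 0}, 1), ({"income": 20, "debt": 15}, 0),
        ({"income": 60, "debt": 2}, 1), ({"income": 30, "debt": 1}, 0)]
print(f"{len(prune(rules, data))} rules kept out of {len(rules)}")
```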


5.
Error back-propagation algorithm for classification of imbalanced data   Cited by 1 (self-citations: 0, others: 1)
Classification of imbalanced data is pervasive, but it is a difficult problem to solve. In order to improve the classification of imbalanced data, this letter proposes a new error function for the error back-propagation algorithm of multilayer perceptrons. The error function intensifies weight updates for the minority class and weakens weight updates for the majority class. We verify the effectiveness of the proposed method through simulations on mammography and thyroid data sets.
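A small numpy sketch of the underlying idea follows: during back-propagation in a one-hidden-layer MLP, the per-example output error is scaled more strongly for minority-class examples and more weakly for majority-class ones. The inverse-frequency weighting used here is an assumption for illustration, not the exact error function from the letter.

```python
# Sketch: one hidden-layer MLP trained with backprop where the output-error term
# is scaled per example -- larger for minority-class examples, smaller for the
# majority class.  Inverse-frequency weights are an assumption for illustration.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 4)); y = (rng.random(600) < 0.1).astype(float)  # ~10% minority
w_class = {0.0: len(y) / (2 * (y == 0).sum()), 1.0: len(y) / (2 * (y == 1).sum())}

W1 = rng.normal(scale=0.1, size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.1, size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.05
for epoch in range(200):
    H = np.tanh(X @ W1 + b1)                  # hidden layer
    out = sigmoid(H @ W2 + b2).ravel()        # output probability
    sw = np.array([w_class[t] for t in y])    # per-example class weight
    delta_out = (sw * (out - y))[:, None]     # weighted output error
    delta_hid = (delta_out @ W2.T) * (1 - H ** 2)
    W2 -= lr * H.T @ delta_out / len(y); b2 -= lr * delta_out.mean(axis=0)
    W1 -= lr * X.T @ delta_hid / len(y); b1 -= lr * delta_hid.mean(axis=0)

print("minority recall:", ((out > 0.5) & (y == 1)).sum() / (y == 1).sum())
```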

6.
With the advent of technology in various scientific fields, high-dimensional data are becoming abundant. A general approach to tackling the resulting challenges is to reduce data dimensionality through feature selection. Traditional feature selection approaches concentrate on selecting relevant features while ignoring irrelevant or redundant ones. However, most of these approaches neglect feature interactions. On the other hand, some datasets have imbalanced classes, which may result in biases towards the majority class. The main goal of this paper is to propose a novel feature selection method based on interaction information (II) to provide higher-level interaction analysis and improve the search procedure in the feature space. In this regard, an evolutionary feature subset selection algorithm based on interaction information is proposed, which consists of three stages. In the first stage, candidate features and candidate feature pairs are identified using traditional feature weighting approaches such as symmetric uncertainty (SU) and bivariate interaction information. In the second stage, candidate feature subsets are formed and evaluated using multivariate interaction information. Finally, the best candidate feature subsets are selected using dominant/dominated relationships. The proposed algorithm is compared with other feature selection algorithms, including mRMR, WJMI, IWFS, IGFS, DCSF, K_OFSD, WFLNS, Information Gain, and ReliefF, in terms of the number of selected features, classification accuracy, F-measure, and algorithm stability, using three different classifiers, namely KNN, NB, and CART. The results justify the improvement in classification accuracy and the robustness of the proposed method in comparison with the other approaches.
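To make the bivariate interaction information concrete, a small sketch follows that computes II(X1; X2; Y) = I(X1, X2; Y) − I(X1; Y) − I(X2; Y) for discrete features using mutual information scores; the sign convention and the use of already-discrete features are assumptions, not the paper's exact estimator.

```python
# Sketch: bivariate interaction information for discrete features,
# II(X1; X2; Y) = I(X1, X2; Y) - I(X1; Y) - I(X2; Y).
# A positive value suggests the pair carries synergy about the class label.
import numpy as np
from sklearn.metrics import mutual_info_score

def interaction_information(x1, x2, y):
    joint = [f"{a}|{b}" for a, b in zip(x1, x2)]       # encode the pair as one variable
    return (mutual_info_score(joint, y)
            - mutual_info_score(x1, y)
            - mutual_info_score(x2, y))

rng = np.random.default_rng(0)
x1 = rng.integers(0, 2, 2000)
x2 = rng.integers(0, 2, 2000)
y_xor = x1 ^ x2                                        # label depends on the pair jointly
print(interaction_information(x1, x2, y_xor))          # clearly positive (synergy)
print(interaction_information(x1, x2, rng.integers(0, 2, 2000)))  # near zero
```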

7.
Recently, the class imbalance problem has attracted much attention from researchers in the field of data mining. When learning from imbalanced data in which most examples are labeled as one class and only few belong to another class, traditional data mining approaches do not have a good ability to predict the crucial minority instances. Unfortunately, many real-world data sets, such as health examination, inspection, credit fraud detection, spam identification, and text mining data, all face this situation. In this study, we present a novel model called the “Information Granulation Based Data Mining Approach” to tackle this problem. The proposed methodology, which imitates the human ability to process information, acquires knowledge from Information Granules rather than from numerical data. This method also introduces a Latent Semantic Indexing based feature extraction tool, using Singular Value Decomposition, to dramatically reduce the data dimensions. In addition, several data sets from the UCI Machine Learning Repository are employed to demonstrate the effectiveness of our method. Experimental results show that our method can significantly increase the ability to classify imbalanced data.
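A minimal sketch of the LSI-style dimensionality reduction step mentioned in the abstract is shown below, using truncated SVD on a TF-IDF matrix built from a toy corpus; the corpus and the number of latent components are assumptions.

```python
# Sketch of the LSI-style step: truncated SVD compresses a high-dimensional,
# sparse representation into a few latent dimensions before classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = ["imbalanced data classification with svm",
        "svm classification of imbalanced data sets",
        "information granulation reduces data dimensions",
        "singular value decomposition for latent semantic indexing",
        "latent semantic indexing reduces feature dimensions"]

X_tfidf = TfidfVectorizer().fit_transform(docs)                      # sparse term space
X_lsi = TruncatedSVD(n_components=2, random_state=0).fit_transform(X_tfidf)
print(X_tfidf.shape, "->", X_lsi.shape)                              # e.g. (5, 15) -> (5, 2)
```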

8.
Uncertainty handling is one of the most important aspects of modelling context-aware systems. It has a direct impact on adaptability, understood as the ability of the system to adjust to changing environmental conditions or hardware configuration (missing data), changing user habits (ambiguous concepts), or imperfect information (low-quality sensors). In mobile context-aware systems, data is most often acquired from the device's hardware sensors (such as GPS or an accelerometer), virtual sensors (such as the activity recognition sensor provided by the Google API), or directly from the user. Uncertainty in such data is inevitable, and therefore it is obligatory to provide mechanisms for modelling and processing it. In this paper, we propose three complementary methods for dealing with the most common uncertainty types present in mobile context-aware systems. We combine a modified certainty factors algebra, a probabilistic interpretation of the rule-based model, and time-parametrised operators into a comprehensive toolkit for modelling and building robust mobile context-aware systems. The presented approach was implemented and evaluated on a practical use case.
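As one concrete ingredient named in the abstract, the sketch below shows the classic MYCIN-style certainty-factor combination for evidence from multiple rules; the paper uses a modified algebra, so this is only the baseline it builds on, and the sensor readings are hypothetical.

```python
# Sketch: classic MYCIN-style certainty-factor combination for rules that
# support (or contradict) the same conclusion.  The paper uses a *modified*
# algebra, so treat this only as the baseline it builds on.
def combine_cf(cf1: float, cf2: float) -> float:
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)              # both rules confirm
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 * (1 + cf1)              # both rules disconfirm
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))   # conflicting evidence

# Hypothetical readings: GPS says "user is outdoors" with CF 0.6, the activity
# sensor agrees with CF 0.4, a low-quality light sensor disagrees (CF -0.3).
cf = combine_cf(0.6, 0.4)
cf = combine_cf(cf, -0.3)
print(round(cf, 3))   # combined certainty of "user is outdoors"
```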

9.
This study investigates how to alleviate class imbalance problems when constructing unbiased classifiers in situations where instances of one class outnumber those of another. Since keeping the data distribution unchanged and expanding the class boundaries after synthetic samples have been added both greatly influence classification performance, we take these two factors into account and propose a Random Walk Over-Sampling approach (RWO-Sampling) that balances different class samples by creating synthetic samples through random walks from the real data. When certain conditions are satisfied, it can be proved that both the expected mean and the standard deviation of the generated samples equal those of the original minority-class data. RWO-Sampling also expands the minority-class boundary after synthetic samples have been generated. In this work, we perform a broad experimental evaluation, and the results show that RWO-Sampling statistically does much better than alternative methods on imbalanced data sets when common baseline algorithms are applied.
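The sketch below illustrates one plausible reading of the random-walk generation step: each synthetic attribute is a real minority value perturbed by standard normal noise scaled by the attribute's standard deviation divided by the square root of the minority sample count. The exact noise scaling is an assumption and should be checked against the paper.

```python
# Sketch of random-walk oversampling: each synthetic minority sample is a real
# minority sample whose attributes are perturbed by sigma_j / sqrt(n) times
# standard normal noise (one reading of the method; check against the paper).
import numpy as np

def rwo_sample(X_min: np.ndarray, n_new: int, rng=np.random.default_rng(0)) -> np.ndarray:
    n, d = X_min.shape
    sigma = X_min.std(axis=0, ddof=1)                 # per-attribute std of minority class
    base = X_min[rng.integers(0, n, size=n_new)]      # real samples to walk from
    return base - (sigma / np.sqrt(n)) * rng.standard_normal((n_new, d))

rng = np.random.default_rng(0)
X_min = rng.normal(loc=[2.0, -1.0], scale=[0.5, 1.5], size=(40, 2))
X_syn = rwo_sample(X_min, n_new=400)
print(X_min.mean(axis=0), X_syn.mean(axis=0))         # means stay close
print(X_min.std(axis=0),  X_syn.std(axis=0))          # spreads stay comparable
```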

10.
This paper theoretically demonstrates that imbalanced data sets have a negative effect on the performance of LDA. The theoretical analysis is confirmed by experimental results: after using several sampling methods to rebalance the imbalanced data sets, it is found that the performance of LDA on the balanced data sets is superior to that of LDA on the imbalanced data sets.

11.
Classification of medical data raises several problems, such as class imbalance, the double meaning of missing data, data volume, and the need for highly interpretable results. In this paper a new algorithm is proposed: MOCA-I (Multi-Objective Classification Algorithm for Imbalanced data), a multi-objective local search algorithm conceived to deal with all of these issues together. It is based on a new formulation as a Pittsburgh multi-objective partial classification rule mining problem, which is described in the first part of this paper. An existing dominance-based multi-objective local search (DMLS) is modified to handle this formulation. After experimentally tuning the parameters of MOCA-I and determining which version of the DMLS algorithm is the most effective, the resulting version of MOCA-I is compared to several state-of-the-art classification algorithms. This comparison is carried out on 10 small and medium-sized data sets from the literature and on 2 real data sets; MOCA-I obtains the best results on the 10 data sets and is statistically better than the other approaches on the real data sets.

12.
The dynamic ensemble selection of classifiers is an effective approach for label-imbalanced data classification. However, such a technique is prone to overfitting, owing to the lack of regularization methods and its dependence on the local geometry of the data. In this study, focusing on binary imbalanced data classification, a novel dynamic ensemble method, namely adaptive ensemble of classifiers with regularization (AER), is proposed to overcome the stated limitations. The method solves the overfitting problem through a new perspective of implicit regularization. Specifically, it leverages the properties of stochastic gradient descent to obtain the solution with the minimum norm, thereby achieving regularization; furthermore, it interpolates the ensemble weights by exploiting the global geometry of the data to further prevent overfitting. According to our theoretical proofs, the seemingly complicated AER paradigm, in addition to its regularization capabilities, can actually reduce the asymptotic time and memory complexities of several other algorithms. We evaluate the proposed AER method on seven benchmark imbalanced datasets from the UCI machine learning repository and one artificially generated GMM-based dataset with five variations. The results show that the proposed algorithm outperforms the major existing algorithms on multiple metrics in most cases, and two hypothesis tests (McNemar’s and Wilcoxon tests) further verify the statistical significance. In addition, the proposed method has other desirable properties, such as particular advantages in dealing with highly imbalanced data, and it pioneers research on regularization for dynamic ensemble methods.

13.
A new support vector machine (SVM) variant, called GSVM, is introduced; it is specially designed for binary classification problems where balanced accuracy between classes is the objective. Starting from a standard SVM, the GSVM is obtained through a low-cost post-processing strategy that modifies the initial bias. Thus, the bias for GSVM is calculated by moving the original SVM bias so as to improve the geometric mean of the true positive rate and the true negative rate. The proposed solution neither modifies the original optimization problem for SVM training nor introduces new hyper-parameters. Experimentation carried out on a large number of databases (23) shows that GSVM obtains the desired balanced accuracy between classes. Furthermore, it outperforms well-known cost-sensitive schemes for SVM without adding complexity or computational cost.
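A minimal sketch of the post-processing idea follows: train a standard SVM, sweep an offset on its decision function, and keep the offset that maximizes the geometric mean of TPR and TNR on validation data. The offset grid and the validation split are assumptions, not the paper's exact procedure for computing the new bias.

```python
# Sketch: post-process a trained SVM by shifting its bias to maximise the
# geometric mean of TPR and TNR on a validation set (offset grid is arbitrary).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, stratify=y, random_state=0)

svm = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
scores = svm.decision_function(X_val)

def gmean(y_true, y_pred):
    tpr = ((y_pred == 1) & (y_true == 1)).sum() / max((y_true == 1).sum(), 1)
    tnr = ((y_pred == 0) & (y_true == 0)).sum() / max((y_true == 0).sum(), 1)
    return np.sqrt(tpr * tnr)

offsets = np.linspace(scores.min(), scores.max(), 200)
best = max(offsets, key=lambda b: gmean(y_val, (scores > b).astype(int)))
print("original g-mean:", round(gmean(y_val, (scores > 0).astype(int)), 3))
print("shifted  g-mean:", round(gmean(y_val, (scores > best).astype(int)), 3))
```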

14.
Data produced in many real-world domains typically have multiple classes and are imbalanced. In multi-class imbalanced classification, problems such as class overlap, noise, and multiple minority classes degrade classifier performance, and effectively solving the multi-class imbalance problem has become an important research topic in machine learning and data mining. Based on recent literature on multi-class imbalanced classification, this paper analyzes and summarizes the field from two perspectives, data preprocessing and algorithm-level classification methods, and gives a detailed analysis of all algorithms in terms of their advantages, disadvantages, and the data sets used. Among the data preprocessing methods, oversampling, undersampling, hybrid sampling, and feature selection are introduced, and the performance of algorithms that use the same data sets is compared. Algorithm-level classification methods are introduced and analyzed from three aspects: base classifier optimization, ensemble learning, and multi-class decomposition techniques. Finally, future research directions in multi-class imbalanced data classification are summarized.

15.
With data in industrial processes becoming larger in scale and easier to access, data-driven technologies have become more prevalent in process monitoring. Fault classification is an indispensable part of process monitoring, and machine learning is an effective tool for fault classification. In most practical cases, however, the amount of fault data is far smaller than that of normal data, and this dataset imbalance leads to a significant decline in the performance of common classifier learning algorithms. To address this issue, we propose a data augmentation method based on Generative Adversarial Networks (GAN) and aided by Gaussian Discriminant Analysis (GDA) to enhance fault classification accuracy. To validate the effectiveness of this method for imbalanced fault classification, a common oversampling method and the basic GAN are compared with our method on toy data and the Tennessee Eastman (TE) benchmark process, using different classification algorithms. In addition, the proposed method is deployed and trained in parallel on the TensorFlow platform, which makes it suitable for applications such as data augmentation and imbalanced fault classification in industrial big-data environments.

16.
胡小生  张润晶  钟勇 《计算机科学》2013,40(11):271-275
Classification of class-imbalanced data is a hot topic in machine learning and data mining research. Traditional classification algorithms are heavily biased, and their performance on the minority class is often unsatisfactory. A two-level clustering cascade mining algorithm for class-imbalanced data is proposed. The algorithm first performs cluster-based undersampling: the majority-class samples are clustered, and the cluster centroids are extracted so that their number matches the number of minority-class samples; these centroids are then combined with all minority-class examples to form a new balanced training set. To avoid the drop in classification accuracy caused by a training set that is too small when very few minority samples are available, SMOTE oversampling is combined with the cluster-based undersampling. A classification method cascading K-means clustering with the C4.5 decision-tree algorithm is then applied to the balanced training set: K-means partitions the training examples into K clusters, a C4.5 decision tree is built within each cluster, and the decision trees over the K clusters refine the classification decision boundary. Experimental results show that the algorithm has advantages for class-imbalanced classification problems.
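A hedged sketch of this pipeline is given below: K-means centroids of the majority class serve as the undersampled majority, the balanced set is partitioned into K clusters, and one decision tree is trained per cluster. Scikit-learn's CART implementation stands in for C4.5, the SMOTE step is omitted, and the cluster counts are assumptions.

```python
# Sketch of the cascade: cluster-centroid undersampling of the majority class,
# then K-means partitioning with one decision tree per cluster (sklearn's CART
# stands in here for C4.5; cluster counts are assumptions).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=3000, weights=[0.93, 0.07], random_state=0)
X_min, X_maj = X[y == 1], X[y == 0]

# Step 1: undersample the majority class to as many centroids as minority samples.
centroids = KMeans(n_clusters=len(X_min), n_init=10, random_state=0).fit(X_maj).cluster_centers_
X_bal = np.vstack([centroids, X_min])
y_bal = np.hstack([np.zeros(len(centroids)), np.ones(len(X_min))])

# Step 2: partition the balanced set into K clusters and train a tree per cluster.
K = 4
part = KMeans(n_clusters=K, n_init=10, random_state=0).fit(X_bal)
trees = {}
for c in range(K):
    m = part.labels_ == c
    if len(np.unique(y_bal[m])) < 2:
        trees[c] = int(y_bal[m][0])          # single-class cluster: remember its label
    else:
        trees[c] = DecisionTreeClassifier(random_state=0).fit(X_bal[m], y_bal[m])

def predict(x):
    c = int(part.predict(x.reshape(1, -1))[0])
    t = trees[c]
    return t if isinstance(t, int) else int(t.predict(x.reshape(1, -1))[0])
```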

17.
The healthy operation of mechanical systems is crucially important for ensuring human safety and economic benefits, so there is high demand for automatic fault diagnosis techniques. However, the number of available faulty samples of mechanical systems is often far smaller than that of healthy samples, and thus traditional data-driven methods often suffer from a high misdiagnosis rate. In this paper, a new fault diagnosis method is developed on the basis of wavelet packet distortion and convolutional neural networks. First, wavelet packet distortion means that wavelet packet coefficients are distorted to augment fault samples, in order to achieve equilibrium between the healthy and faulty classes. Second, a convolutional neural network based classification model is trained using the balanced training dataset. Third, the trained model is applied to classify the testing samples. Finally, the efficacy of the developed method for imbalanced fault diagnosis of mechanical systems is demonstrated through a number of experiments.
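The sketch below illustrates coefficient-distortion augmentation on a toy 1-D signal: decompose, randomly rescale the detail coefficients, and reconstruct a new sample. A plain discrete wavelet transform via PyWavelets stands in for the wavelet packet transform used in the paper, and the distortion scale is an assumption.

```python
# Sketch of coefficient-distortion augmentation: decompose a 1-D fault signal,
# randomly rescale the detail coefficients, and reconstruct a new sample.
# A plain DWT via PyWavelets stands in for the wavelet packet transform.
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
signal = np.sin(2 * np.pi * 30 * t) + 0.3 * rng.standard_normal(1024)  # toy fault signal

def distort(x, wavelet="db4", level=3, scale=0.2, rng=rng):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    distorted = [coeffs[0]] + [c * (1 + scale * rng.standard_normal(c.shape))
                               for c in coeffs[1:]]       # perturb detail bands
    return pywt.waverec(distorted, wavelet)

augmented = np.stack([distort(signal) for _ in range(8)])  # 8 new fault samples
print(augmented.shape)
```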

18.
19.
Imbalanced data classification, an important type of classification task, is challenging for standard learning algorithms. There are different strategies for handling the problem; as popular imbalanced learning technologies, data-level imbalanced learning methods have attracted ample attention from researchers in recent years. However, most data-level approaches linearly generate new instances by using local neighbor information rather than the overall data distribution. Differing from these algorithms, in this study we develop a new data-level method, namely generative learning (GL), to deal with imbalanced problems. In GL, we fit the distribution of the original data and generate new data on the basis of this distribution by adopting a Gaussian mixture model. The generated data, including synthetic minority and majority classes, are used to train learning models. The proposed method is validated through experiments performed on real-world data sets. Results show that our approach is competitive and comparable with other methods, such as SMOTE, SMOTE-ENN, SMOTE-TomekLinks, Borderline-SMOTE, and safe-level-SMOTE. The Wilcoxon signed-rank test is applied, and the results again show the significant superiority of our proposal.
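A minimal sketch of distribution-based generation follows: fit a Gaussian mixture model to the minority class and sample synthetic instances from it until the classes are balanced. The number of mixture components is an assumption, and the paper also generates majority-class samples, which is omitted here.

```python
# Sketch of distribution-based generation: fit a Gaussian mixture model to the
# minority class and sample synthetic instances from it (the number of mixture
# components is an assumption; the paper also generates majority samples).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.mixture import GaussianMixture

X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
X_min, X_maj = X[y == 1], X[y == 0]

gmm = GaussianMixture(n_components=3, random_state=0).fit(X_min)
X_new, _ = gmm.sample(n_samples=len(X_maj) - len(X_min))   # fill the gap to balance

X_bal = np.vstack([X_maj, X_min, X_new])
y_bal = np.hstack([np.zeros(len(X_maj)), np.ones(len(X_min) + len(X_new))])
print(np.bincount(y_bal.astype(int)))                      # balanced classes
```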

20.
Class imbalance is among the most persistent complications that may confront the traditional supervised learning task in real-world applications. The problem occurs, in the binary case, when the number of instances in one class significantly outnumbers the number of instances in the other class. This situation is a handicap when trying to identify the minority class, as the learning algorithms are not usually adapted to such characteristics. The approaches for dealing with imbalanced datasets fall into two major categories: data sampling and algorithmic modification. Cost-sensitive learning solutions, which incorporate both the data-level and algorithm-level approaches, assume higher misclassification costs for samples in the minority class and seek to minimize high-cost errors. Nevertheless, there is no fully exhaustive comparison among those models that could help us determine the most appropriate one under different scenarios. The main objective of this work is to analyze the performance of data-level proposals against algorithm-level proposals, focusing on cost-sensitive models, and versus a hybrid procedure that combines those two approaches. We will show, by means of a statistical comparative analysis, that we cannot highlight a unique approach among the rest. This leads to a discussion about the intrinsic data characteristics of the imbalanced classification problem, which will help to follow new paths that can lead to the improvement of current models, mainly focusing on class overlap and dataset shift in imbalanced classification.
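To illustrate the three families being compared, the sketch below contrasts data sampling (SMOTE from imbalanced-learn), an algorithm-level cost-sensitive model (class weighting), and a hybrid of both on synthetic data; the choice of classifier, metric, and data is an illustrative assumption, not the paper's experimental setup.

```python
# Sketch of the three families being compared: data sampling (SMOTE), an
# algorithm-level cost-sensitive model (class_weight), and a hybrid of both.
# The classifier and metric choices are illustrative assumptions.
from imblearn.over_sampling import SMOTE            # assumes imbalanced-learn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

X_sm, y_sm = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

candidates = {
    "data sampling (SMOTE)": LogisticRegression(max_iter=1000).fit(X_sm, y_sm),
    "cost-sensitive (class_weight)": LogisticRegression(max_iter=1000,
                                                        class_weight="balanced").fit(X_tr, y_tr),
    "hybrid (SMOTE + class_weight)": LogisticRegression(max_iter=1000,
                                                        class_weight="balanced").fit(X_sm, y_sm),
}
for name, model in candidates.items():
    print(name, round(balanced_accuracy_score(y_te, model.predict(X_te)), 3))
```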

