Similar Articles
20 similar articles found (search time: 0 ms)
1.
Recently, mining from data streams has become an important and challenging task for many real-world applications such as credit card fraud protection and sensor networking. One popular solution is to separate stream data into chunks, learn a base classifier from each chunk, and then integrate all base classifiers for effective classification. In this paper, we propose a new dynamic classifier selection (DCS) mechanism to integrate base classifiers for effective mining from data streams. The proposed algorithm dynamically selects a single “best” classifier to classify each test instance at run time. Our scheme uses statistical information from attribute values, and uses each attribute to partition the evaluation set into disjoint subsets, followed by a procedure that evaluates the classification accuracy of each base classifier on these subsets. Given a test instance, its attribute values determine the subsets that the similar instances in the evaluation set have constructed, and the classifier with the highest classification accuracy on those subsets is selected to classify the test instance. Experimental results and comparative studies demonstrate the efficiency and efficacy of our method. Such a DCS scheme appears to be promising in mining data streams with dramatic concept drifting or with a significant amount of noise, where the base classifiers are likely conflictive or have low confidence. A preliminary version of this paper was published in the Proceedings of the 4th IEEE International Conference on Data Mining, pp. 305–312, Brighton, UK. Xingquan Zhu received his Ph.D. degree in Computer Science from Fudan University, Shanghai, China, in 2001. He spent four months with Microsoft Research Asia, Beijing, China, where he worked on content-based image retrieval with relevance feedback. From 2001 to 2002, he was a Postdoctoral Associate in the Department of Computer Science, Purdue University, West Lafayette, IN.
He is currently a Research Assistant Professor in the Department of Computer Science, University of Vermont, Burlington, VT. His research interests include data mining, machine learning, data quality, multimedia computing, and information retrieval. Since 2000, Dr. Zhu has published extensively, including over 40 refereed papers in various journals and conference proceedings. Xindong Wu is a Professor and the Chair of the Department of Computer Science at the University of Vermont. He holds a Ph.D. in Artificial Intelligence from the University of Edinburgh, Britain. His research interests include data mining, knowledge-based systems, and Web information exploration. He has published extensively in these areas in various journals and conferences, including IEEE TKDE, TPAMI, ACM TOIS, IJCAI, ICML, KDD, ICDM, and WWW, as well as 11 books and conference proceedings. Dr. Wu is the Editor-in-Chief of the IEEE Transactions on Knowledge and Data Engineering (by the IEEE Computer Society), the founder and current Steering Committee Chair of the IEEE International Conference on Data Mining (ICDM), an Honorary Editor-in-Chief of Knowledge and Information Systems (by Springer), and a Series Editor of the Springer Book Series on Advanced Information and Knowledge Processing (AI&KP). He is the 2004 ACM SIGKDD Service Award winner. Ying Yang received her Ph.D. in Computer Science from Monash University, Australia, in 2003. Following academic appointments at the University of Vermont, USA, she is currently a Research Fellow at Monash University, Australia. Dr. Yang is recognized for contributions in the fields of machine learning and data mining. She has published many scientific papers and book chapters on adaptive learning, proactive mining, noise cleansing and discretization. Contact her at yyang@mail.csse.monash.edu.au.
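As a rough illustration of the selection step described above, here is a minimal sketch; the single-attribute partitioning, the `OneNN` stand-in base learner, and all names are illustrative assumptions rather than the authors' exact procedure:

```python
import numpy as np

class OneNN:
    """Tiny 1-nearest-neighbour learner, standing in for any per-chunk base classifier."""
    def fit(self, X, y):
        self.X, self.y = np.asarray(X, float), np.asarray(y)
        return self
    def predict(self, X):
        X = np.atleast_2d(np.asarray(X, float))
        d = ((X[:, None, :] - self.X[None, :, :]) ** 2).sum(-1)
        return self.y[d.argmin(1)]

def dcs_predict(classifiers, X_eval, y_eval, x, attr=0):
    """Select the base classifier most accurate on the evaluation-set subset
    that shares the test instance's value of one partitioning attribute."""
    X_eval, y_eval = np.asarray(X_eval, float), np.asarray(y_eval)
    mask = X_eval[:, attr] == x[attr]
    if not mask.any():                      # no similar instances: use the whole set
        mask = np.ones(len(y_eval), dtype=bool)
    accs = [(c.predict(X_eval[mask]) == y_eval[mask]).mean() for c in classifiers]
    best = classifiers[int(np.argmax(accs))]
    return best.predict(x)[0]
```

With conflicting base classifiers trained on different chunks, the evaluation subset that shares the test instance's attribute value steers selection toward the classifier that still fits the current concept.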

2.
Dynamic classifier selection (DCS) plays a strategic role in the field of multiple classifier systems (MCS). This paper presents a study of the performance of DCS by Local Accuracy estimation (DCS-LA). To this end, upper bounds against which the performance can be evaluated are proposed. The experimental results on five datasets clearly show the effectiveness of the selection methods based on local accuracy estimates.
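The core of DCS-LA can be sketched in a few lines; the classifier interface, the distance measure, and the neighbourhood size are illustrative assumptions:

```python
import numpy as np

def dcs_la_predict(classifiers, X_val, y_val, x, k=5):
    """DCS by Local Accuracy: score each classifier on the k validation
    samples nearest to x, then let the locally most accurate one decide."""
    X_val, y_val = np.asarray(X_val, float), np.asarray(y_val)
    nn = np.argsort(((X_val - x) ** 2).sum(axis=1))[:k]   # region of competence
    local_acc = [(c.predict(X_val[nn]) == y_val[nn]).mean() for c in classifiers]
    best = classifiers[int(np.argmax(local_acc))]
    return best.predict(x.reshape(1, -1))[0]
```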

3.
In this paper, a measure of competence based on random classification (MCR) for classifier ensembles is presented. The measure selects dynamically (i.e. for each test example) a subset of classifiers from the ensemble that perform better than a random classifier. Therefore, weak (incompetent) classifiers that would adversely affect the performance of the classification system are eliminated. When all classifiers in the ensemble are evaluated as incompetent, the classification accuracy of the system can be increased by using the random classifier instead. Theoretical justification for using the measure with the majority voting rule is given. Two MCR-based systems were developed and their performance was compared against six multiple classifier systems using data sets taken from the UCI Machine Learning Repository and the Ludmila Kuncheva Collection. The systems developed typically had the highest classification accuracies regardless of the ensemble type used (homogeneous or heterogeneous).
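A simplified sketch of the competence test follows; the neighbourhood-based accuracy estimate and the fallback rule are assumptions about the general scheme, since the paper's MCR measure is defined probabilistically:

```python
import numpy as np

def mcr_predict(classifiers, X_val, y_val, x, n_classes, k=7, rng=None):
    """Keep only classifiers whose accuracy in the neighbourhood of x beats a
    random guesser (expected accuracy 1/n_classes); majority-vote the
    survivors, or fall back to a random guess if none is competent."""
    rng = rng if rng is not None else np.random.default_rng(0)
    X_val, y_val = np.asarray(X_val, float), np.asarray(y_val)
    nn = np.argsort(((X_val - x) ** 2).sum(axis=1))[:k]
    competent = [c for c in classifiers
                 if (c.predict(X_val[nn]) == y_val[nn]).mean() > 1.0 / n_classes]
    if not competent:                       # all incompetent: random classifier
        return int(rng.integers(n_classes))
    votes = [int(c.predict(x.reshape(1, -1))[0]) for c in competent]
    return int(np.bincount(votes).argmax())
```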

4.
This work aims to connect two rarely combined research directions: non-stationary data stream classification and data analysis with skewed class distributions. We propose a novel framework that employs stratified bagging for training base classifiers and integrates data preprocessing and dynamic ensemble selection methods for imbalanced data stream classification. The proposed approach was evaluated in computer experiments on 135 artificially generated data streams with various imbalance ratios, label noise levels, and types of concept drift, as well as on two selected real streams. Four preprocessing techniques and two dynamic selection methods, applied at both the bagging classifier and base estimator levels, were considered. The results showed that, for highly imbalanced data streams, dynamic ensemble selection coupled with data preprocessing can outperform online and chunk-based state-of-the-art methods.

5.
We propose a new ranking criterion based on marginal classification ability for classifier selection by ordered aggregation (OA). To characterize a classifier's classification ability, a random reference classifier is used to simulate the original classifier, yielding a probabilistic model of its classification ability. To improve ensemble performance, the proposed margin-based ranking criterion is combined with dynamic ensemble selection: the feature space is first partitioned into regions of differing competence, an optimal classifier ensemble is then constructed within each partition, and finally dynamic ensemble selection is used to classify unknown samples. Experiments on UCI data sets show that the margin-based ranking criterion outperforms existing ranking criteria; further experiments show that the resulting dynamic ensemble selection algorithm achieves higher classification accuracy, smaller ensemble size, and shorter classification time than existing classifier ensemble algorithms.

6.
Different classifiers with different characteristics and methodologies can complement each other and cover each other's weaknesses, so classifier ensembles are an important approach for addressing the limitations of single-classifier systems. In this article we explore an automatic and fast function to approximate the accuracy of a given classifier on a typical dataset. Using this function, ensemble learning can be converted into an optimisation problem: the goal is a model that approximates the performance of a predetermined classifier over any dataset. Based on this model, an optimisation problem is designed and a genetic algorithm is employed as the optimiser to explore the best classifier set in each subspace. The proposed ensemble methodology is called classifier ensemble based on subspace learning (CEBSL). CEBSL is examined on several datasets and shows considerable improvements.

7.
In this paper, a generalized adaptive ensemble generation and aggregation (GAEGA) method for the design of multiple classifier systems (MCSs) is proposed. GAEGA adopts an “over-generation and selection” strategy to achieve a good bias-variance tradeoff. In the training phase, different ensembles of classifiers are adaptively generated by fitting the validation data globally with different degrees. The test data are then classified by each of the generated ensembles. The final decision is made by considering both each ensemble's ability to fit the validation data locally and the need to reduce the risk of overfitting. The performance of GAEGA is assessed experimentally in comparison with other multiple classifier aggregation methods on 16 data sets. The experimental results demonstrate that GAEGA significantly outperforms the other methods in terms of average accuracy, with improvements ranging from 2.6% to 17.6%.

8.
To address the problem that constraint-score-based feature selection is sensitive to the composition and cardinality of the pairwise constraints, we propose a dynamic ensemble selection algorithm based on the bagging constraint score (BCS-DES). The algorithm introduces the bagging constraint score (BCS) into dynamic ensemble selection: the sample space is partitioned into different regions, and a multi-population parallel genetic algorithm selects a locally optimal classifier ensemble for each test sample, thereby improving classification accuracy. Experiments on UCI data sets show that BCS-DES is less affected by the composition and cardinality of pairwise constraints than existing feature selection algorithms and performs better.

9.
The paper presents a novel machine learning algorithm for training a compound classifier system that consists of a set of area classifiers. Area classifiers recognize objects drawn from their respective competence areas. Splitting the feature space into areas and selecting area classifiers are the two key processes of the algorithm; both take place simultaneously in the course of an optimization process aimed at maximizing system performance. An evolutionary algorithm is used to find the optimal solution. A number of experiments were carried out to evaluate system performance. The results show that the proposed method outperforms each elementary classifier as well as simple voting.

10.

In dynamic ensemble selection (DES) techniques, only the most competent classifiers, for the classification of a specific test sample, are selected to predict the sample’s class labels. The key in DES techniques is estimating the competence of the base classifiers for the classification of each specific test sample. The classifiers’ competence is usually estimated according to a given criterion, which is computed over the neighborhood of the test sample defined on the validation data, called the region of competence. A problem arises when there is a high degree of noise in the validation data, causing the samples belonging to the region of competence to not represent the query sample. In such cases, the dynamic selection technique might select the base classifier that overfitted the local region rather than the one with the best generalization performance. In this paper, we propose two modifications in order to improve the generalization performance of any DES technique. First, a prototype selection technique is applied over the validation data to reduce the amount of overlap between the classes, producing smoother decision borders. During generalization, a local adaptive K-Nearest Neighbor algorithm is used to minimize the influence of noisy samples in the region of competence. Thus, DES techniques can better estimate the classifiers’ competence. Experiments are conducted using 10 state-of-the-art DES techniques over 30 classification problems. The results demonstrate that the proposed scheme significantly improves the classification accuracy of dynamic selection techniques.

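The prototype-selection step can be illustrated with an edited-nearest-neighbour (ENN) style filter; this is one plausible choice under stated assumptions, not necessarily the exact technique used in the paper:

```python
import numpy as np

def enn_filter(X, y, k=3):
    """Edited-nearest-neighbour prototype selection: drop validation samples
    whose k nearest neighbours disagree with their label, smoothing the class
    borders that later define each region of competence."""
    X, y = np.asarray(X, float), np.asarray(y)
    keep = []
    for i in range(len(X)):
        d = ((X - X[i]) ** 2).sum(axis=1)
        nn = np.argsort(d)[1:k + 1]            # exclude the sample itself
        if np.bincount(y[nn]).argmax() == y[i]:
            keep.append(i)
    return X[keep], y[keep]
```

Running any DES technique on the filtered validation set means noisy samples no longer enter the region of competence, which is exactly the failure mode the abstract describes.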

11.
We propose a minimum-distance classifier ensemble method based on adaptive distance metrics and describe how the individual classifiers are generated. First, bootstrap resampling is applied to the training set to generate several sub-sample sets; an adaptive distance metric model is built from each sub-sample set, and an individual classifier is trained on the sub-sample set under that model. For aggregation, the final conclusion is obtained by relative majority (plurality) voting. Experiments on UCI benchmark data sets comparing the method with existing approaches show that the minimum-distance classifier ensemble based on adaptive distance metrics is the most effective.

12.
Feature selection is the process of choosing the relevant subset of features from a high-dimensional dataset to enhance classifier performance, and much research has been devoted to it. Algorithms such as Naïve Bayes (NB), decision trees, and genetic algorithms have been applied to high-dimensional datasets to select the relevant features and to increase computational speed. The proposed model presents a solution for feature selection using ensemble classifier algorithms. The proposed algorithm combines minimum redundancy maximum relevance (mRMR) and the forest optimization algorithm (FOA). An ensemble of classifiers based on support vector machines (SVM), K-nearest neighbors (KNN), and NB is further used to enhance classification performance. The mRMR-FOA is used to select the relevant features from the various datasets, and a 21% to 24% improvement is recorded in the feature selection. The ensemble classifier algorithms further improve performance and provide an accuracy of 96%.
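The mRMR half of the pipeline can be sketched as a greedy search over discrete features; the FOA search and the ensemble stage are omitted, and the estimator and names are assumptions rather than the paper's implementation:

```python
import numpy as np

def mutual_info(a, b):
    """Mutual information (in nats) between two discrete variables."""
    a, b = np.asarray(a), np.asarray(b)
    mi = 0.0
    for va in np.unique(a):
        for vb in np.unique(b):
            pab = np.mean((a == va) & (b == vb))
            if pab > 0:
                mi += pab * np.log(pab / (np.mean(a == va) * np.mean(b == vb)))
    return mi

def mrmr(X, y, n_select):
    """Greedy mRMR: at each step add the feature maximising
    relevance(f, y) minus mean redundancy(f, already-selected features)."""
    n_feat = X.shape[1]
    selected, remaining = [], list(range(n_feat))
    relevance = [mutual_info(X[:, j], y) for j in range(n_feat)]
    while len(selected) < n_select:
        def score(j):
            red = (np.mean([mutual_info(X[:, j], X[:, s]) for s in selected])
                   if selected else 0.0)
            return relevance[j] - red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

The redundancy term is what separates mRMR from plain relevance ranking: a duplicated feature scores high on relevance but is penalised once its twin is already selected.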

13.
A data-driven ensemble classifier for credit scoring analysis
This study focuses on predicting whether a credit applicant can be categorized as good, bad or borderline from information initially supplied. This is essentially a classification task for credit scoring. Given its importance, many researchers have recently worked on ensembles of classifiers. However, to the best of our knowledge, unrepresentative samples drastically reduce the accuracy of the deployed classifier, and few have attempted to preprocess the input samples into more homogeneous cluster groups and then fit the ensemble classifier accordingly. For this reason, we introduce the concept of class-wise classification as a preprocessing step in order to obtain an efficient ensemble classifier. This strategy works better than a direct ensemble of classifiers without the preprocessing step. The proposed ensemble classifier is constructed by incorporating several data mining techniques, mainly involving optimal associate binning to discretize continuous values; a neural network, a support vector machine, and a Bayesian network are used to augment the ensemble classifier. In particular, the Markov blanket concept of the Bayesian network allows for a natural form of feature selection, which provides a basis for mining association rules. The learned knowledge is represented in multiple forms, including causal diagrams and constrained association rules. The data-driven nature of the proposed system distinguishes it from existing hybrid/ensemble credit scoring systems.

14.
Zhang Shaoping, Liang Xuechun. 《计算机应用》 (Journal of Computer Applications), 2015, 35(5): 1306–1309
Most traditional classification algorithms assume balanced data sets, and their performance often degrades markedly when the sample data are imbalanced. For imbalanced classification, an optimized support vector machine (SVM) ensemble classifier model is proposed: KSMOTE and Bootstrap are used to preprocess the imbalanced data, the corresponding SVM models are generated and their parameters are optimized with the complex method, the optimized parameters are then used to generate the SVM ensemble classifier in parallel, and the classification result is obtained by a voting mechanism. Experiments on five UCI benchmark data sets show that the optimized SVM ensemble classifier model clearly improves classification accuracy over the plain SVM model and the optimized single SVM model; the influence of different bootNum values on classifier performance is also examined.

15.
Advances in selective clustering ensembles
Traditional clustering ensemble methods fuse all generated clustering members to obtain the final clustering result. In supervised learning, selective classifier ensembles achieve better results; taking this as inspiration, applying the same idea to clustering ensembles is defined as selective clustering ensembles. This paper surveys the key techniques of selective clustering ensembles and discusses future research directions.

16.
The concept of classifier competence is fundamental to multiple classifier systems (MCSs). In this study, a method for calculating classifier competence is developed using a probabilistic model. In the method, first a randomised reference classifier (RRC), whose class supports are realisations of random variables with beta probability distributions, is constructed. The parameters of the distributions are chosen in such a way that, for each feature vector in a validation set, the expected values of the class supports produced by the RRC equal the class supports produced by the modelled classifier. This allows the probability of correct classification of the RRC to be used as the competence of the modelled classifier. The competences calculated for the validation set are then generalised to the entire feature space by constructing a competence function based on a potential function model or regression. Three systems based on dynamic classifier selection and dynamic ensemble selection (DES) were constructed using the method developed. The DES-based system had a statistically significantly higher average rank than those of eight benchmark MCSs over 22 data sets and a heterogeneous ensemble. The results obtained indicate that the full vector of class supports should be used for evaluating classifier competence, as this potentially improves the performance of MCSs.
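The RRC competence for a single validation sample can be approximated by Monte Carlo simulation. The fixed beta concentration `m` below is an illustrative assumption: `Beta(s·m, (1−s)·m)` has mean `s`, which satisfies the expected-support constraint stated above, but the paper's actual parameter choice is not reproduced here:

```python
import numpy as np

def rrc_competence(supports, correct_idx, n_draws=20000, seed=0):
    """Estimate the probability that a randomised reference classifier (RRC),
    whose class supports are beta draws with the same expected values as the
    modelled classifier's supports, picks the true class."""
    rng = np.random.default_rng(seed)
    supports = np.asarray(supports, float)
    m = 4.0                                    # illustrative concentration a + b
    draws = rng.beta(np.maximum(supports * m, 1e-9),
                     np.maximum((1 - supports) * m, 1e-9),
                     size=(n_draws, len(supports)))
    return float((draws.argmax(axis=1) == correct_idx).mean())
```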

17.
To improve the ensemble classification accuracy of decision trees, this paper describes a rotation forest classifier ensemble algorithm based on feature transformation: the attribute set of the data is randomly split into subsets, principal component analysis is applied to sub-samples drawn on each attribute subset, and new sample data are constructed, which increases the diversity of the base classifiers and improves prediction accuracy. On the Weka platform, Bagging, AdaBoost, and rotation forest were compared for ensembles of pruned and unpruned J48 decision tree classifiers, using the average accuracy of ten runs of 10-fold cross-validation as the comparison criterion. The results show that rotation forest achieves higher prediction accuracy than the other two algorithms, confirming that rotation forest is an effective decision tree ensemble algorithm.

18.
Tumor gene expression profiles have few samples and high dimensionality. To address this, we propose an ensemble classifier algorithm for extracting informative tumor genes and recognizing tumor subtypes. The algorithm builds candidate gene subsets from the genes' Fisher ratio values, and then uses two measures, correlation coefficient and mutual information, to construct feature subsets reflecting gene co-expression behavior and regulatory relationships, respectively. Particle swarm optimization combined with SVM and with KNN forms two base classifiers, which extract informative genes from the candidate subsets and classify tumor subtypes; finally, the base classifiers' results are combined by absolute majority voting. Experimental results on G. Gordon lung cancer subtype recognition demonstrate the feasibility and effectiveness of the algorithm.

19.
The primary effect of using a reduced number of classifiers is a reduction in the computational requirements during learning and classification time. In addition to this obvious result, research shows that fusing all available classifiers does not guarantee the best performance, only good results on average. The much-researched issue of whether it is more convenient to fuse or to select has become even more interesting in recent years with the development of Online Boosting theory, where a limited set of classifiers is continuously updated as new inputs are observed and classifications performed. The concept of online classification has recently received significant interest in the computer vision community. Classifiers can be trained on the visual features of a target, casting the tracking problem into a binary classification one: distinguishing the target from the background. Here we discuss how to optimize the performance of a classifier ensemble employed for target tracking in video sequences. In particular, we propose the F-score measure as a novel means to select the members of the ensemble in a dynamic fashion. For each frame, the ensemble is built as a subset of a larger pool of classifiers, its members selected according to their F-score. We observed an overall increase in classification accuracy and a general tendency toward redundancy reduction among the members of an F-score-optimized ensemble. We carried out our experiments both on benchmark binary datasets and on standard video sequences.
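The per-frame selection can be sketched as follows; the classifier interface and the top-k rule are illustrative assumptions about the general scheme:

```python
import numpy as np

def f_score(y_true, y_pred):
    """F1 score for binary target-vs-background labels (1 = target)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    if tp == 0:
        return 0.0
    p, r = tp / (tp + fp), tp / (tp + fn)
    return 2 * p * r / (p + r)

def select_ensemble(pool, X_frame, y_frame, k):
    """Rebuild the ensemble each frame: keep the k pool members with the
    highest F-score on the current frame's labelled samples."""
    scores = [f_score(y_frame, clf.predict(X_frame)) for clf in pool]
    order = np.argsort(scores)[::-1][:k]
    return [pool[i] for i in order]
```

Because F-score balances precision and recall, members that always answer "target" (or always "background") are ranked below members that actually separate the two classes, which is the redundancy-reduction effect the abstract reports.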

20.
Microblog rumor detection based on deep features and ensemble classifiers
Microblogs contain a large amount of false information and even rumors; the wide spread of microblog rumors threatens social stability and harms the interests of individuals and the state. To detect microblog rumors effectively, a detection method based on deep features and an ensemble classifier is proposed. First, deep classification features are extracted from microblog sentiment orientation, microblog propagation processes, and microblog users' historical information; the classification features are then used to train an ensemble classifier, which is finally applied to detect microblog rumors. Experimental results show that the proposed method based on deep features and an ensemble classifier effectively improves the performance of microblog rumor detection.

