Cascade Generalization

Authors: João Gama, Pavel Brazdil

Affiliation: LIACC, FEP, University of Porto, Rua Campo Alegre, 823, 4150 Porto, Portugal (both authors)

Abstract: Using multiple classifiers to increase learning accuracy is an active research area. In this paper we present two related methods for merging classifiers. The first method, Cascade Generalization, couples classifiers loosely and belongs to the family of stacking algorithms. Its basic idea is to apply the set of classifiers sequentially, at each step extending the original data with new attributes. The new attributes are derived from the probability class distribution given by a base classifier. This constructive step extends the representational language of the high-level classifiers, relaxing their bias. The second method exploits tight coupling of classifiers by applying Cascade Generalization locally: at each iteration of a divide-and-conquer algorithm, the instance space is reconstructed by adding new attributes, each representing the probability that an example belongs to a given class as estimated by a base classifier. We have implemented three local generalization algorithms: the first merges a linear discriminant with a decision tree, the second merges naive Bayes with a decision tree, and the third merges both a linear discriminant and naive Bayes with a decision tree. All three algorithms outperform the corresponding single models. Cascade also outperforms other methods for combining classifiers, such as Stacked Generalization, and competes well against Boosting at statistically significant confidence levels.
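The constructive step described in the abstract can be made concrete with a short sketch. Below is a minimal two-level cascade in Python, assuming a scikit-learn-style interface; the CascadeGeneralization class name and the GaussianNB/DecisionTreeClassifier pairing are illustrative choices for this sketch, not the authors' implementation.

```python
# Minimal sketch of two-level Cascade Generalization, assuming a
# scikit-learn-style interface. Names here are illustrative only.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier


class CascadeGeneralization:
    """Level-1 classifier trained on the original attributes extended
    with the class-probability distribution of a level-0 classifier."""

    def __init__(self, base, high):
        self.base = base    # level-0 (base) classifier
        self.high = high    # level-1 (high-level) classifier

    def _extend(self, X):
        # Constructive step: insert one new attribute per class,
        # holding P(class | example) estimated by the base classifier.
        probs = self.base.predict_proba(X)
        return np.hstack([X, probs])

    def fit(self, X, y):
        self.base.fit(X, y)
        self.high.fit(self._extend(X), y)
        return self

    def predict(self, X):
        return self.high.predict(self._extend(X))


# Usage example: merge naive Bayes with a decision tree, one of the
# classifier combinations studied in the paper, applied globally.
if __name__ == "__main__":
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    model = CascadeGeneralization(GaussianNB(), DecisionTreeClassifier())
    model.fit(Xtr, ytr)
    print("accuracy:", (model.predict(Xte) == yte).mean())
```

The local variant described in the abstract would instead apply this extension inside each node of the divide-and-conquer algorithm rather than once over the whole data.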
Keywords: multiple models; constructive induction; combining classifiers; merging classifiers