On Comparing Classifiers: Pitfalls to Avoid and a Recommended Approach

Authors: Steven L. Salzberg

Affiliation: Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA

Abstract: An important component of many data mining projects is finding a good classification algorithm, a process that requires careful experimental design. Done carelessly, comparative studies of classification and other types of algorithms can easily result in statistically invalid conclusions. This is especially true when data mining techniques are used to analyze very large databases, which inevitably contain some statistically unlikely data. This paper describes several phenomena that can, if ignored, invalidate an experimental comparison. These phenomena and the conclusions that follow apply not only to classification, but to computational experiments in almost any aspect of data mining. The paper also discusses why comparative analysis is more important in evaluating some types of algorithms than others, and offers suggestions on how to avoid the pitfalls suffered by many experimental studies.
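One pitfall of the kind the abstract alludes to is the multiple-comparisons problem: when many algorithms are tested against the same baseline, a few will appear "significantly better" purely by chance. The sketch below is not from the paper; it is a minimal simulation under assumed parameters (100 hypothetical classifiers, a 1000-example test set, a one-sided z-test at alpha = 0.05) showing that even classifiers that merely guess at random produce false positives at roughly the alpha rate.

```python
import random

random.seed(0)

N_TEST = 1000        # test-set size (assumed for illustration)
N_CLASSIFIERS = 100  # number of "competing" algorithms compared
Z_CRIT = 1.645       # one-sided critical value for alpha = 0.05

# Every classifier here is actually random guessing (true accuracy 0.5),
# so any "significant" win over the 0.5 baseline is a false positive.
false_positives = 0
for _ in range(N_CLASSIFIERS):
    correct = sum(random.random() < 0.5 for _ in range(N_TEST))
    acc = correct / N_TEST
    # z-test of observed accuracy against the 0.5 baseline;
    # the variance of a fair Bernoulli trial is 0.25
    z = (acc - 0.5) / (0.25 / N_TEST) ** 0.5
    if z > Z_CRIT:
        false_positives += 1

print(false_positives)  # on the order of alpha * N_CLASSIFIERS, by chance alone
```

With 100 tests at alpha = 0.05, about five spurious "winners" are expected; this is why comparative studies need corrections (e.g., Bonferroni) or experimental designs that account for the number of comparisons made.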

Keywords: classification; comparative studies; statistical methods
This article is indexed in SpringerLink and other databases.
|