When is the Naive Bayes approximation not so naive?
Authors: Christopher R Stephens, Hugo Flores Huerta, Ana Ruíz Linares
Affiliation: 1. C3 Centro de Ciencias de la Complejidad, Universidad Nacional Autónoma de México, Mexico, Mexico; 2. Instituto de Ciencias Nucleares, Universidad Nacional Autónoma de México, Mexico, Mexico; 3. IIMAS, Universidad Nacional Autónoma de México, Mexico, Mexico; 4. Instituto Tecnológico de Minatitlán, Minatitlán, Mexico
Abstract: The Naive Bayes approximation (NBA) and its associated classifier are widely used and offer robust performance across a large spectrum of problem domains. Since the NBA rests on a very strong assumption, independence among the features, this success has been somewhat puzzling. Various hypotheses have been put forward to explain it, and many generalizations of the NBA have been proposed. In this paper we propose a set of "local" error measures, associated with the likelihood functions for subsets of attributes and for each class, and show explicitly how these local errors combine to give a "global" error associated with the full attribute set. In so doing we formulate a framework within which the phenomenon of error cancellation, or augmentation, can be quantified and its impact on classifier performance estimated and predicted a priori. These diagnostics allow us to develop a deeper and more quantitative understanding of why the NBA is so robust and under what circumstances one expects it to break down. We show how these diagnostics can be used to select which features to combine, use them in a simple generalization of the NBA, and apply the resulting classifier to a set of real-world data sets.
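For orientation, the sketch below states the NBA in its standard form and gives one natural way to formalize the local and global error measures described in the abstract. The symbol $\epsilon_S$ and the subset notation are illustrative choices for this summary, not necessarily the authors' exact definitions. The NBA replaces the full class-conditional likelihood with a product of one-dimensional marginals:

\[
P(C \mid x_1,\dots,x_n) \;\propto\; P(C)\,\prod_{i=1}^{n} P(x_i \mid C),
\]

which is exact only when the attributes $x_1,\dots,x_n$ are conditionally independent given the class $C$. A natural local error for a subset $S$ of attributes is the ratio of the true joint likelihood to its product approximation,

\[
\epsilon_S(C) \;=\; \frac{P(\mathbf{x}_S \mid C)}{\prod_{i \in S} P(x_i \mid C)},
\]

and when $S$ is the full attribute set this becomes a global error. The exact posterior ratio between two classes $C$ and $C'$ then factors as the naive Bayes ratio times a ratio of global errors,

\[
\frac{P(C \mid \mathbf{x})}{P(C' \mid \mathbf{x})}
\;=\;
\frac{P(C)\prod_{i} P(x_i \mid C)}{P(C')\prod_{i} P(x_i \mid C')}
\times
\frac{\epsilon(C)}{\epsilon(C')},
\]

so the classification decision is unchanged whenever the global errors cancel across classes, i.e. $\epsilon(C) \approx \epsilon(C')$, even when the individual probability estimates are badly biased. This decomposition is one way to see how error cancellation, rather than genuine feature independence, can account for the robustness of the NBA.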
Keywords:
This article is indexed in SpringerLink and other databases.