     


The Racing Algorithm: Model Selection for Lazy Learners
Authors:Oded Maron  Andrew W. Moore
Affiliation:(1) M.I.T. Artificial Intelligence Lab, NE45-755, 545 Technology Square, Cambridge, MA 02139;(2) Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA 15213
Abstract:Given a set of models and some training data, we would like to find the model that best describes the data. Finding the model with the lowest generalization error is a computationally expensive process, especially if the number of testing points is high or if the number of models is large. Optimization techniques such as hill climbing or genetic algorithms are helpful, but they can end up with a model that is arbitrarily worse than the best one, or they cannot be applied at all when there is no distance metric on the space of discrete models. In this paper we develop a technique called “racing” that tests the set of models in parallel, quickly discards those models that are clearly inferior, and concentrates the computational effort on differentiating among the better models. Racing is especially suitable for selecting among lazy learners, since training requires negligible expense and incremental testing using leave-one-out cross validation is efficient. We use racing to select among various lazy learning algorithms and to find relevant features in applications ranging from robot juggling to lesion detection in MRI scans.
Keywords:lazy learning  model selection  cross validation  optimization  attribute selection
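As a rough illustration of the racing procedure described in the abstract, the sketch below runs a Hoeffding-style race over a small family of k-nearest-neighbour regressors (lazy learners), accumulating leave-one-out errors one point at a time and discarding any model whose error bound is clearly worse than the current best. The k-NN family, the delta parameter, and the function names are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch of the racing idea (Hoeffding-style bounds), assuming:
# candidate models are k-NN regressors differing only in k, and per-point
# leave-one-out errors are clipped to [0, 1] so the bound applies.
import math
import random

def knn_loo_error(k, X, y, i):
    """Leave-one-out squared error at point i for a k-NN regressor (a lazy learner)."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(X[i], X[j])), y[j])
        for j in range(len(X)) if j != i
    )
    pred = sum(t for _, t in dists[:k]) / k
    return min((pred - y[i]) ** 2, 1.0)  # clip so errors are bounded in [0, 1]

def race(ks, X, y, delta=0.05):
    """Test all models in parallel, one LOO point at a time; discard any model whose
    lower error bound exceeds the best model's upper bound."""
    alive = {k: 0.0 for k in ks}              # running sum of LOO errors per model
    order = random.sample(range(len(X)), len(X))
    for n, i in enumerate(order, start=1):
        for k in alive:
            alive[k] += knn_loo_error(k, X, y, i)
        # Hoeffding half-width for the mean of n bounded errors.
        eps = math.sqrt(math.log(2 * len(ks) / delta) / (2 * n))
        best_upper = min(s / n for s in alive.values()) + eps
        alive = {k: s for k, s in alive.items() if s / n - eps <= best_upper}
        if len(alive) == 1:
            break
    return min(alive, key=lambda k: alive[k])

# Tiny usage example on synthetic 1-D data.
random.seed(0)
X = [[random.uniform(0, 1)] for _ in range(200)]
y = [math.sin(6 * x[0]) + random.gauss(0, 0.1) for x in X]
print("selected k =", race([1, 3, 5, 9, 15], X, y))
```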
This article is indexed in SpringerLink and other databases.