Speeding up algorithm selection using average ranking and active testing by introducing runtime
Authors:Salisu Mamman Abdulrahman  Pavel Brazdil  Jan N. van Rijn  Joaquin Vanschoren
Affiliation:1.Kano University of Science and Technology,Wudil,Nigeria;2.LIAAD,INESC TEC, Rua Dr. Roberto Frias,Porto,Portugal;3.Faculdade de Economia,Universidade do Porto, Rua Dr. Roberto Frias,Porto,Portugal;4.University of Freiburg,Freiburg,Germany;5.Leiden Institute of Advanced Computer Science,Leiden University,Leiden,The Netherlands;6.Eindhoven University of Technology,Eindhoven,Netherlands
Abstract:Algorithm selection methods can be sped up substantially by incorporating multi-objective measures that give preference to algorithms that are both promising and fast to evaluate. In this paper, we introduce such a measure, A3R, and incorporate it into two algorithm selection techniques: average ranking and active testing. Average ranking combines algorithm rankings observed on prior datasets to identify the best algorithms for a new dataset. The second method iteratively selects algorithms to be tested on the new dataset, learning from each new evaluation to intelligently select the next best candidate. We show how both methods can be upgraded to incorporate the multi-objective measure A3R, which combines accuracy and runtime. It is necessary to establish the correct balance between accuracy and runtime, as otherwise time will be wasted by conducting less informative tests. The correct balance can be set by an appropriate parameter setting within the function A3R that trades off accuracy and runtime. Our results demonstrate that the upgraded versions of average ranking and active testing lead to much better mean interval loss values than their accuracy-based counterparts.
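The accuracy–runtime trade-off described in the abstract can be sketched as follows. This is a hedged reconstruction of an A3R-style measure based only on the abstract's description: the function name, argument names, and the default value of the trade-off exponent `p` are illustrative assumptions, not taken from the paper itself.

```python
def a3r(acc_candidate, acc_ref, time_candidate, time_ref, p=1.0 / 64):
    """Illustrative A3R-style score for a candidate algorithm on a dataset.

    Combines the accuracy ratio (candidate vs. a reference algorithm)
    with the runtime ratio raised to a small exponent p, so that faster
    algorithms with comparable accuracy score higher.

    NOTE: argument names and the default p are assumptions for this sketch.
    """
    accuracy_ratio = acc_candidate / acc_ref
    runtime_ratio = time_candidate / time_ref
    # Raising the runtime ratio to a small p dampens the influence of
    # runtime; with p = 0 the score reduces to the pure accuracy ratio.
    return accuracy_ratio / (runtime_ratio ** p)


# Usage: a candidate with equal accuracy but 10x lower runtime than the
# reference scores above 1, i.e. it would be preferred.
score = a3r(acc_candidate=0.85, acc_ref=0.85,
            time_candidate=10.0, time_ref=100.0, p=0.25)
```

The exponent `p` is the balancing parameter the abstract refers to: small values make the score nearly accuracy-driven, while larger values increasingly reward fast-to-evaluate algorithms.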
This article is indexed by SpringerLink and other databases.