Model selection for support vector machines via uniform design
Authors: Chien-Ming Huang, Dennis K.J. Lin
Affiliations: a. Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology, 43 Keelung Road, Section 4, Taipei 106, Taiwan
b. Department of Supply Chain and Information Systems, Pennsylvania State University, University Park, PA 16802, USA
c. Institute of Statistical Science, Academia Sinica, Taipei 115, Taiwan
Abstract: Model selection is the problem of choosing a good parameter setting so as to obtain better generalization performance in a learning task. A nested uniform design (UD) methodology is proposed for efficient, robust, and automatic model selection for support vector machines (SVMs). The proposed method selects a candidate set of parameter combinations and carries out k-fold cross-validation to evaluate the generalization performance of each combination. In contrast to a conventional exhaustive grid search, the method can be regarded as a deterministic analog of random search. It dramatically cuts down the number of parameter trials and also provides the flexibility to adjust the candidate set size under a computational time constraint. The key theoretical advantage of UD model selection over grid search is that UD points are “far more uniform” and “far more space filling” than lattice grid points; this better uniformity and space-filling property makes the UD selection scheme more efficient by avoiding wasteful function evaluations at nearly identical parameter settings. The proposed method is evaluated on different learning tasks, different data sets, and different SVM algorithms.
Keywords: Discrepancy measure; Gaussian kernel; k-fold cross-validation; Model selection; Number-theoretic methods; Quasi-Monte Carlo; Support vector machine; Uniform design
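The abstract describes the procedure at a high level: spread a small uniform-design point set over the SVM hyperparameter space (e.g., C and the Gaussian-kernel gamma on log scales), score each point with k-fold cross-validation, then nest a finer design inside a shrunken box around the current best point. Below is a minimal sketch of that loop, assuming scikit-learn's SVC and cross_val_score are available; the design points come from a simple good-lattice-point rule rather than the actual UD tables used in the paper, and the parameter ranges, shrink factor, and helper names (ud_points, ud_search) are illustrative, not taken from the source.

```python
# Sketch of nested uniform-design (UD) model selection for a Gaussian-kernel SVM.
# Design points use a crude good-lattice-point rule as a stand-in for UD tables.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def ud_points(n, h=None):
    """Return n nearly uniform points in [0, 1)^2 via a good-lattice-point rule."""
    if h is None:
        # pick a generator coprime with n (a crude stand-in for a proper UD table)
        h = next(k for k in range(2, n) if np.gcd(k, n) == 1)
    i = np.arange(1, n + 1)
    return np.column_stack(((2 * i - 1) / (2 * n), ((i * h) % n + 0.5) / n))

def ud_search(X, y, n_points=13, n_stages=2, k=5,
              logC=(-2.0, 6.0), logG=(-8.0, 2.0), shrink=0.5):
    """Nested UD search over (log2 C, log2 gamma), scored by k-fold CV accuracy."""
    best = None
    for _ in range(n_stages):
        for u, v in ud_points(n_points):
            C = 2.0 ** (logC[0] + u * (logC[1] - logC[0]))
            gamma = 2.0 ** (logG[0] + v * (logG[1] - logG[0]))
            score = cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=k).mean()
            if best is None or score > best[0]:
                best = (score, C, gamma)
        # nesting step: shrink the search box around the current best point
        c0, g0 = np.log2(best[1]), np.log2(best[2])
        wC = (logC[1] - logC[0]) * shrink / 2
        wG = (logG[1] - logG[0]) * shrink / 2
        logC, logG = (c0 - wC, c0 + wC), (g0 - wG, g0 + wG)
    return best

X, y = load_breast_cancer(return_X_y=True)
print(ud_search(X, y))  # (cv accuracy, C, gamma)
```

With 13 design points per stage and two stages, this evaluates 26 parameter combinations instead of the hundreds typical of an exhaustive grid, which is the efficiency argument made in the abstract.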
Indexed by ScienceDirect and other databases.