Enlarging the Margins in Perceptron Decision Trees
Authors: Kristin P. Bennett, Nello Cristianini, John Shawe-Taylor, Donghui Wu
Affiliation: (1) Dept of Mathematical Sciences, Rensselaer Polytechnic Institute, 110 8th St., Troy, NY 12180, USA; (2) Dept of Computer Science, Royal Holloway, University of London, Egham, Surrey, TW20 0EX, UK; (3) Dept of Computer Science, Royal Holloway, University of London, Egham, Surrey, TW20 0EX, UK; (4) Dept of Mathematical Sciences, Rensselaer Polytechnic Institute, 110 8th St., Troy, NY 12180, USA
Abstract: Capacity control in perceptron decision trees is typically performed by controlling their size. We prove that other quantities can be just as relevant for reducing their flexibility and combating overfitting. In particular, we provide an upper bound on the generalization error that depends both on the size of the tree and on the margins of its decision nodes: enlarging the margins in a perceptron decision tree therefore reduces this upper bound. Based on this analysis, we introduce three new algorithms that induce large-margin perceptron decision trees. To assess the effect of the large-margin bias, OC1 (Murthy, Kasif and Salzberg, Journal of Artificial Intelligence Research, 1994, 2, 1–32), a well-known system for inducing perceptron decision trees, is used as the baseline algorithm. An extensive experimental study on real-world data showed that, on all but one dataset, the three new algorithms performed better than, or at least not significantly worse than, OC1; on every dataset, OC1 performed worse than the best margin-based method.
Keywords: capacity control, decision trees, perceptron, learning theory, learning algorithm
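The abstract does not spell out the three algorithms, but the core idea, replacing the plain perceptron split at each internal node with one trained to clear a margin, can be illustrated with a short sketch. The Python below is a hypothetical minimal illustration, not the paper's method: `margin_perceptron`, `build_tree`, the margin target `tau`, and all other names are assumptions introduced here for exposition only.

```python
# A minimal sketch of a large-margin perceptron decision tree.
# Each internal node trains a "margin perceptron": a perceptron that keeps
# updating while any training point lies within a target margin tau of the
# hyperplane, then splits the data on the sign of w.x + b. Labels are +1/-1.
import numpy as np

def margin_perceptron(X, y, tau=0.1, lr=0.01, epochs=200):
    """Train w, b so that ideally y_i * (w.x_i + b) >= tau for all i."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        updated = False
        for i in range(n):
            if y[i] * (X[i] @ w + b) < tau:   # point inside the margin: update
                w += lr * y[i] * X[i]
                b += lr * y[i]
                updated = True
        if not updated:                        # every point cleared the margin
            break
    return w, b

def build_tree(X, y, depth=0, max_depth=4, min_leaf=5):
    # Leaf if the node is pure, too small, or the depth limit is reached.
    if depth == max_depth or len(y) <= min_leaf or len(set(y)) == 1:
        return {"leaf": True, "label": 1 if y.sum() >= 0 else -1}
    w, b = margin_perceptron(X, y)
    mask = (X @ w + b) >= 0
    if mask.all() or (~mask).all():            # degenerate split: make a leaf
        return {"leaf": True, "label": 1 if y.sum() >= 0 else -1}
    return {"leaf": False, "w": w, "b": b,
            "right": build_tree(X[mask], y[mask], depth + 1, max_depth, min_leaf),
            "left": build_tree(X[~mask], y[~mask], depth + 1, max_depth, min_leaf)}

def predict(node, x):
    while not node["leaf"]:
        node = node["right"] if x @ node["w"] + node["b"] >= 0 else node["left"]
    return node["label"]

# Usage on synthetic linearly separable data:
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
tree = build_tree(X, y)
print(predict(tree, np.array([1.0, 1.0])))    # expected: 1
```

The design choice this sketch mirrors is the one the abstract argues for: capacity is controlled not only by the tree's size (here `max_depth` and `min_leaf`) but also by how far each node's hyperplane is pushed from the training points (here `tau`).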
This article is indexed in SpringerLink and other databases.