Hardness results for neural network approximation problems
Affiliation: 1. Research School of Information Sciences and Engineering, Australian National University, Canberra ACT 0200, Australia; 2. Department of Computer Science, Technion, Haifa 32000, Israel
Abstract: We consider the problem of efficiently learning in two-layer neural networks. We investigate the computational complexity of agnostically learning with simple families of neural networks as the hypothesis classes. We show that it is NP-hard to find a linear threshold network of a fixed size that approximately minimizes the proportion of misclassified examples in a training set, even if there is a network that correctly classifies all of the training examples. In particular, for a training set that is correctly classified by some two-layer linear threshold network with k hidden units, it is NP-hard to find such a network that makes mistakes on a proportion smaller than c/k² of the examples, for some constant c. We prove a similar result for the problem of approximately minimizing the quadratic loss of a two-layer network with a sigmoid output unit.
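To make the two objectives in the abstract concrete, the following is a minimal NumPy sketch, assuming ±1 labels and dense arrays; the function names, argument shapes, and the label convention are illustrative assumptions, not anything specified by the paper.

    import numpy as np

    def threshold_net(X, W, b, v, c):
        """Two-layer linear threshold network with k hidden units.

        X: (n, d) examples; W: (k, d) hidden weights; b: (k,) hidden biases;
        v: (k,) output weights; c: scalar output bias. Returns +/-1 labels.
        """
        hidden = np.where(X @ W.T + b >= 0, 1.0, -1.0)   # k threshold units
        return np.where(hidden @ v + c >= 0, 1.0, -1.0)  # threshold output unit

    def misclassified_proportion(X, y, W, b, v, c):
        """First objective: proportion of training examples classified wrongly."""
        return float(np.mean(threshold_net(X, W, b, v, c) != y))

    def quadratic_loss(X, y, W, b, v, c):
        """Second objective: quadratic loss with a sigmoid output unit over
        the same hidden layer of linear threshold units."""
        hidden = np.where(X @ W.T + b >= 0, 1.0, -1.0)
        out = 1.0 / (1.0 + np.exp(-(hidden @ v + c)))
        return float(np.mean((out - y) ** 2))

Under this reading, the first hardness result says that even when some parameter setting drives misclassified_proportion to zero, it is NP-hard to find parameters that bring it below c/k².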
This article is indexed by ScienceDirect and other databases.