Neural Network Output Optimization Using Interval Analysis
Authors: de Weerdt, E.; Chu, Q.P.; Mulder, J.A.
Affiliation: Control & Simulation Div., Delft Univ. of Technol., Delft
Abstract: The problem of output optimization over a specified input space of neural networks (NNs) with fixed weights is discussed in this paper. The problem is highly nonlinear when nonlinear activation functions are used, and this global optimization problem is encountered in the reinforcement learning (RL) community. Interval analysis is applied to guarantee that all solutions are found to any desired accuracy with guaranteed bounds. The major drawbacks of interval analysis, i.e., the dependency effect and the high computational load, are both present for the problem of NN output optimization. Taylor models (TMs) are introduced to reduce these drawbacks; they have excellent convergence properties for small intervals. However, the dependency effect remains and is even aggravated when large input domains are evaluated. As an alternative to TMs, a different form of polynomial inclusion function, called the polynomial set (PS) method, is introduced. This new method has the property that the bounds on the network output are at least as tight as those obtained through standard interval arithmetic (IA). Experiments show that the PS method outperforms the other methods for the NN output optimization problem.
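To illustrate the starting point of the paper, the sketch below shows how naive interval arithmetic propagates an input box through a small fixed-weight feed-forward network to obtain guaranteed output bounds. This is not the authors' implementation; the network size, random weights, and tanh activation are illustrative assumptions, and the resulting bounds exhibit exactly the dependency effect that the Taylor-model and polynomial-set methods are designed to mitigate.

```python
# Minimal sketch (not the paper's code): guaranteed but loose output bounds
# for a fixed-weight tanh network via naive interval arithmetic (IA).
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate an axis-aligned box [lo, hi] through y = W x + b."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    y_lo = W_pos @ lo + W_neg @ hi + b   # smallest attainable value per output
    y_hi = W_pos @ hi + W_neg @ lo + b   # largest attainable value per output
    return y_lo, y_hi

def interval_tanh(lo, hi):
    """tanh is monotone, so the interval image is [tanh(lo), tanh(hi)]."""
    return np.tanh(lo), np.tanh(hi)

def nn_output_bounds(lo, hi, layers):
    """Bound the network output over the input box [lo, hi].

    `layers` is a list of (W, b) pairs; the last layer is left linear.
    The bounds are guaranteed but generally not tight, because naive IA
    treats each neuron independently (the dependency effect).
    """
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_affine(lo, hi, W, b)
        if i < len(layers) - 1:
            lo, hi = interval_tanh(lo, hi)
    return lo, hi

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    layers = [(rng.standard_normal((4, 2)), rng.standard_normal(4)),
              (rng.standard_normal((1, 4)), rng.standard_normal(1))]
    lo, hi = nn_output_bounds(np.array([-1.0, -1.0]), np.array([1.0, 1.0]), layers)
    print("guaranteed output bounds:", lo, hi)
```

In a branch-and-bound global optimizer, bounds of this kind are used to discard sub-boxes of the input space that cannot contain the global optimum; tighter inclusion functions, such as the PS method described in the abstract, prune more aggressively and so reduce the computational load.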