Controlled descent training
Authors:Viktor Andersson  Balázs Varga  Vincent Szolnoky  Andreas Syrén  Rebecka Jörnsten  Balázs Kulcsár
Affiliation: 1. R&D, Centiro AB, Borås, Sweden; 2. Chalmers University of Technology, Gothenburg, Sweden

Abstract: In this work, a novel, model-based training method for artificial neural networks (ANNs) is developed, supported by optimal control theory. The method augments the training labels in order to robustly guarantee convergence of the training loss and to improve the convergence rate. Dynamic label augmentation is proposed within the framework of gradient descent training, where the convergence of the training loss is controlled. First, we capture the training behavior with the help of empirical Neural Tangent Kernels (NTKs) and borrow tools from systems and control theory to analyze both the local and global training dynamics (e.g., stability and reachability). Second, we propose to dynamically alter the gradient descent training mechanism via fictitious labels as control inputs and an optimal state-feedback policy. In this way, we enforce locally optimal and convergent training behavior. The novel algorithm, Controlled Descent Training (CDT), guarantees local convergence. CDT opens new possibilities in the analysis, interpretation, and design of ANN architectures. The applicability of the method is demonstrated on standard regression and classification problems.
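To make the abstract's control-theoretic view concrete, the sketch below illustrates the idea on a toy regression problem. In the NTK (linearized) regime, gradient descent on an augmented label y + u makes the residual e_t = f_t - y evolve as e_{t+1} = (I - ηΘ)e_t + ηΘu_t, so the fictitious label shift u_t enters exactly like a control input in a discrete-time linear system, which is what makes state-feedback design applicable. This is a minimal, hypothetical illustration under strong simplifying assumptions (a tiny two-layer network, a one-step quadratic-cost feedback policy), not the authors' CDT implementation; all names and constants are invented for the example.

```python
import numpy as np

# --- Toy setup (shapes and constants are illustrative, not from the paper) ---
rng = np.random.default_rng(0)
n, d, m = 20, 5, 64               # samples, input dim, hidden width

X = rng.normal(size=(n, d))
y = np.sin(X.sum(axis=1))          # toy regression targets

# Two-layer network f(x) = v^T tanh(W x) / sqrt(m)
W = rng.normal(size=(m, d))
v = rng.normal(size=m)

def forward(W, v, X):
    return np.tanh(X @ W.T) @ v / np.sqrt(m)

def jacobian(W, v, X):
    """Per-sample gradient of f w.r.t. all parameters, stacked row-wise."""
    H = np.tanh(X @ W.T)                    # (n, m) hidden activations
    dH = (1.0 - H**2) * v / np.sqrt(m)      # (n, m) chain rule through tanh
    dW = dH[:, :, None] * X[:, None, :]     # (n, m, d) grads w.r.t. W
    dv = H / np.sqrt(m)                     # (n, m)  grads w.r.t. v
    return np.concatenate([dW.reshape(len(X), -1), dv], axis=1)

J = jacobian(W, v, X)
ntk = J @ J.T                               # empirical NTK Gram matrix (n, n)

# Linearized residual dynamics: e_{t+1} = (I - eta*ntk) e_t + eta*ntk u_t,
# with the fictitious label perturbation u_t acting as the control input.
eta = 0.05
A = np.eye(n) - eta * ntk
B = eta * ntk

# One-step-lookahead quadratic-cost state feedback u_t = -K e_t
# (an assumed stand-in for the paper's optimal policy):
# minimize e_{t+1}^T Q e_{t+1} + u^T R u  =>  K = (B^T Q B + R)^{-1} B^T Q A.
Q, R = np.eye(n), 0.1 * np.eye(n)
K = np.linalg.solve(B.T @ Q @ B + R, B.T @ Q @ A)

e = forward(W, v, X) - y
for t in range(200):
    u = -K @ e                              # fictitious label shift (control)
    e = A @ e + B @ u                       # controlled residual update
print("final residual norm:", np.linalg.norm(e))
```

Note that with this learning rate the uncontrolled map I - ηΘ may have eigenvalues outside the unit circle (plain gradient descent would diverge), while the feedback on the residual drives it toward zero, mirroring the convergence guarantee the abstract attributes to CDT.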
Keywords: convergent learning  gradient descent training  label augmentation  label selection  Neural Tangent Kernel  optimal labels