Lie group continual meta learning algorithm
Authors: Jiang Mengjuan, Li Fanzhang
Affiliation: School of Computer Science and Technology, Soochow University, Suzhou, China
Abstract:

Humans can use acquired experience to learn new skills quickly without forgetting the knowledge they already have. Neural networks, however, struggle with continual learning: they easily fall into the stability-plasticity dilemma, which leads to catastrophic forgetting. Since meta-learning treats previously acquired knowledge as a prior and can directly optimize the final goal, this paper proposes LGCMLA (Lie Group Continual Meta Learning Algorithm), an improvement of the CMLA (Continual Meta Learning Algorithm) proposed by Jiang et al. On the one hand, LGCMLA strengthens the continuity between tasks by changing the inner-loop update rule: instead of randomly initializing the parameters for each task, each subsequent task starts from the parameters updated on the previous task. On the other hand, it constrains the parameter space to orthogonal groups and adopts natural Riemannian gradient descent to accelerate convergence. This not only corrects the poor convergence and stability of CMLA, but also further improves the generalization performance of the model and resolves the stability-plasticity dilemma more effectively. Experiments on the miniImageNet, tieredImageNet, and Fewshot-CIFAR100 (Canadian Institute For Advanced Research) datasets demonstrate the effectiveness of LGCMLA. In particular, compared to MAML (Model-Agnostic Meta-Learning) with a standard four-layer convolutional backbone, accuracy improves by 16.4% for 1-shot and 17.99% for 5-shot under the 5-way setting on miniImageNet.
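
The two mechanisms the abstract describes can be made concrete with a short sketch. The code below is a minimal illustration on toy quadratic task losses, not the authors' implementation: riemannian_step performs one natural-gradient step that keeps a weight matrix on the orthogonal group, and inner_loop applies the LGCMLA-style rule of warm-starting each task from the parameters updated on the previous task. All names (inner_loop, grad_fn) and hyperparameters here are hypothetical.

import numpy as np
from scipy.linalg import expm

def riemannian_step(W, G, lr):
    # Project the Euclidean gradient G onto the tangent space of the
    # orthogonal group at W: A = G W^T - W G^T is skew-symmetric.
    A = G @ W.T - W @ G.T
    # Move along the geodesic; expm of a skew-symmetric matrix is
    # orthogonal, so W stays on the group (W @ W.T remains I).
    return expm(-lr * A) @ W

def inner_loop(W, tasks, grad_fn, lr=0.05):
    # LGCMLA-style inner loop: each task starts from the parameters
    # updated on the previous task, not from a fresh initialization.
    for task in tasks:
        W = riemannian_step(W, grad_fn(W, task), lr)
    return W

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 8
    W0 = np.linalg.qr(rng.standard_normal((n, n)))[0]  # orthogonal init
    # Toy "tasks": each is (X, Y) with loss(W) = ||W X - Y||^2 / 2.
    tasks = [(rng.standard_normal((n, 5)), rng.standard_normal((n, 5)))
             for _ in range(3)]
    grad_fn = lambda W, t: (W @ t[0] - t[1]) @ t[0].T  # dL/dW
    W = inner_loop(W0, tasks, grad_fn)
    print("orthogonality error:", np.abs(W @ W.T - np.eye(n)).max())

The orthogonality check at the end is the point of the construction: because every update multiplies W by an orthogonal matrix, the parameters never leave the group, which is how the constraint on the parameter space is enforced throughout adaptation.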
