Squeezing More Past Knowledge for Online Class-Incremental Continual Learning
Authors: Da Yu, Mingyi Zhang, Mantian Li, Fusheng Zha, Junge Zhang, Lining Sun, Kaiqi Huang
Abstract: Continual learning (CL) studies the problem of accumulating knowledge over time from a stream of data. A crucial challenge is that neural networks suffer from performance degradation on previously seen data, known as catastrophic forgetting, because parameters are shared across tasks. In this work, we consider the more practical online class-incremental CL setting, where the model learns new samples in an online manner and may continuously encounter new classes; moreover, prior knowledge is unavailable during training and evaluation. Existing works usually exploit stored samples along a single dimension, which discards a lot of valuable supervisory information. To better tackle this setting, we propose a novel replay-based CL method that leverages the multi-level representations produced as training samples pass through the network and strengthens supervision to consolidate previous knowledge. Specifically, besides the raw samples themselves, we store the corresponding logits and features in the memory. Furthermore, to imitate the predictions of the past model, we construct extra constraints from the multi-level information stored in the memory. With the same number of samples for replay, our method can therefore exploit more past knowledge to prevent interference. We conduct extensive evaluations on several popular CL datasets, and the experiments show that our method consistently outperforms state-of-the-art methods with various sizes of episodic memory. We further provide a detailed analysis of these results and demonstrate that our method is more viable in practical scenarios.
Keywords: Catastrophic forgetting, class-incremental learning, continual learning (CL), experience replay
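
The abstract describes replaying stored samples together with their saved logits and features, and adding constraints that make the current model imitate the past model's outputs at several levels. The sketch below illustrates that idea in PyTorch; it is a minimal illustration assuming a model whose forward pass returns (features, logits), and the buffer class, reservoir-sampling policy, and loss weights alpha/beta are assumptions for illustration, not the authors' released implementation.

```python
import random
import torch
import torch.nn.functional as F

class MultiLevelBuffer:
    """Episodic memory storing raw samples plus the logits and features the
    model produced for them, so replay can supervise at several levels with
    the same number of stored samples."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []   # list of (x, y, logits, feats) tuples
        self.seen = 0

    def add(self, x, y, logits, feats):
        # Reservoir sampling keeps an approximately uniform sample of the stream.
        for i in range(x.size(0)):
            entry = (x[i].cpu(), y[i].cpu(),
                     logits[i].detach().cpu(), feats[i].detach().cpu())
            if len(self.items) < self.capacity:
                self.items.append(entry)
            else:
                j = random.randint(0, self.seen)
                if j < self.capacity:
                    self.items[j] = entry
            self.seen += 1

    def sample(self, batch_size, device):
        batch = random.sample(self.items, min(batch_size, len(self.items)))
        xs, ys, ls, fs = zip(*batch)
        return (torch.stack(xs).to(device), torch.stack(ys).to(device),
                torch.stack(ls).to(device), torch.stack(fs).to(device))

def replay_step(model, buffer, x_new, y_new, optimizer, alpha=1.0, beta=1.0):
    """One online update: cross-entropy on the incoming batch plus multi-level
    constraints (logit and feature imitation) on replayed memory samples."""
    feats_new, logits_new = model(x_new)          # assumed: model returns both
    loss = F.cross_entropy(logits_new, y_new)

    if len(buffer.items) > 0:
        x_m, y_m, logits_m, feats_m = buffer.sample(x_new.size(0), x_new.device)
        feats_r, logits_r = model(x_m)
        loss = loss + F.cross_entropy(logits_r, y_m)
        # Extra constraints: imitate the stored (past) logits and features.
        loss = loss + alpha * F.mse_loss(logits_r, logits_m)
        loss = loss + beta * F.mse_loss(feats_r, feats_m)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    buffer.add(x_new, y_new, logits_new, feats_new)
    return loss.item()
```

In this sketch the replayed batch contributes three terms: the usual cross-entropy on stored labels, plus imitation losses on stored logits and features, so each stored sample carries more supervisory signal than raw replay alone.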