Brain-inspired class incremental learning
Citation: Wang Wei, Zhang Zhiying, Guo Jielong, Lan Hai, Yu Hui and Wei Xian. Brain-inspired class incremental learning[J]. Application Research of Computers, 2023, 40(3): 671-675+688.
Authors: Wang Wei  Zhang Zhiying  Guo Jielong  Lan Hai  Yu Hui  Wei Xian
Affiliation: Liaoning Technical University (Wang Wei, Zhang Zhiying); Fujian Institute of Research on the Structure of Matter, Chinese Academy of Sciences (Guo Jielong, Lan Hai, Yu Hui, Wei Xian)
Foundation: Young Scientists Fund of the National Natural Science Foundation of China (61701211); Basic Scientific Research Project of the Education Department of Liaoning Province (LJKZ0362); Fujian Provincial Science and Technology Program (2021T3003, 2021T3068); Quanzhou Science and Technology Program (2021C065L)
Abstract: Most existing class incremental learning methods store raw data or extend the network structure, and under memory constraints they cannot effectively alleviate catastrophic forgetting. To address this, the paper proposes a brain-inspired generative replay approach. First, it uses a VAE-ACGAN to simulate the brain's memory self-organizing system, improving the quality of the generated pseudo-samples. Second, it introduces a shared parameter module and per-task private parameter modules to protect already-extracted features. Finally, it fits a Gaussian mixture model over the generator's latent variables and samples task-specific pseudo-samples for replay. Experiments on MNIST, Permuted MNIST and CIFAR-10 show classification accuracies of 92.91%, 91.44% and 40.58%, respectively, significantly better than other class incremental learning methods. On MNIST, the backward transfer and forward transfer metrics reach 3.32% and 0.83%, showing that the method balances stability and plasticity across tasks and effectively prevents catastrophic forgetting.

Keywords: class incremental learning  continual learning  catastrophic forgetting  brain-inspired generative replay
Received: 2022-06-10
Revised: 2023-02-10
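The abstract's second step (shared versus private parameter modules) can be pictured with a minimal sketch: a shared trunk holds features reused across tasks, while each task gets its own private head, and freezing the trunk when a new task arrives is one simple way to "protect" already-extracted features. This is an illustrative assumption, not the paper's implementation; the class name, layer sizes, and the freezing strategy are hypothetical.

```python
# Minimal sketch (assumptions, not the paper's code) of shared/private
# parameter modules for class incremental learning.
import torch
import torch.nn as nn

class SharedPrivateNet(nn.Module):
    def __init__(self, in_dim=784, feat_dim=128):
        super().__init__()
        # Shared module: features reused across all tasks.
        self.shared = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        # Private modules: one head per task, created on demand.
        self.private = nn.ModuleDict()
        self.feat_dim = feat_dim

    def add_task(self, task_id, n_classes):
        self.private[str(task_id)] = nn.Linear(self.feat_dim, n_classes)

    def forward(self, x, task_id):
        return self.private[str(task_id)](self.shared(x))

net = SharedPrivateNet()
net.add_task(0, n_classes=2)

# When task 1 arrives, protect the shared features by excluding them
# from the optimizer: only the new private head is trained.
net.add_task(1, n_classes=2)
for p in net.shared.parameters():
    p.requires_grad_(False)
optim = torch.optim.SGD(net.private["1"].parameters(), lr=0.1)
out = net(torch.rand(4, 784), task_id=1)   # logits for task 1's classes
```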
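The third step, sampling replay data from a Gaussian mixture over the generator's latent space, can likewise be sketched with a plain VAE in place of the paper's VAE-ACGAN. Everything here (network sizes, the use of scikit-learn's GaussianMixture, one mixture fit per stored task) is an assumption for illustration.

```python
# Minimal sketch: fit a GMM over VAE latent codes, then sample it to
# decode pseudo-samples for replay. Not the authors' implementation.
import torch
import torch.nn as nn
from sklearn.mixture import GaussianMixture

class TinyVAE(nn.Module):
    def __init__(self, in_dim=784, z_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu, self.logvar = nn.Linear(256, z_dim), nn.Linear(256, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, in_dim), nn.Sigmoid())

    def encode(self, x):
        h = self.enc(x)
        return self.mu(h), self.logvar(h)

    def decode(self, z):
        return self.dec(z)

def fit_latent_gmm(vae, data, n_components):
    """Fit a GMM over the means of the approximate posterior q(z|x)."""
    with torch.no_grad():
        mu, _ = vae.encode(data)
    gmm = GaussianMixture(n_components=n_components, covariance_type="full")
    gmm.fit(mu.cpu().numpy())
    return gmm

def sample_replay(vae, gmm, n_samples):
    """Draw latent codes from the mixture and decode pseudo-samples."""
    z, _ = gmm.sample(n_samples)
    z = torch.as_tensor(z, dtype=torch.float32)
    with torch.no_grad():
        return vae.decode(z)

# Usage: after training on a task, fit one component per seen class,
# then mix decoded pseudo-samples into the next task's training batches.
vae = TinyVAE()
old_data = torch.rand(512, 784)            # stand-in for earlier-task data
gmm = fit_latent_gmm(vae, old_data, n_components=2)
pseudo = sample_replay(vae, gmm, n_samples=64)
print(pseudo.shape)                        # torch.Size([64, 784])
```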
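The reported backward/forward transfer (BWT/FWT) figures follow the transfer metrics that are standard in continual learning (Lopez-Paz and Ranzato, GEM, 2017); the helper below is a sketch of those usual definitions, not code from the paper, and the accuracy matrix is hypothetical.

```python
# Sketch of the standard BWT/FWT definitions over an accuracy matrix R.
import numpy as np

def bwt_fwt(R, b):
    """R[i, j]: test accuracy on task j after training through task i.
    b[j]: accuracy of a randomly initialized model on task j (for FWT)."""
    T = R.shape[0]
    bwt = np.mean([R[T - 1, j] - R[j, j] for j in range(T - 1)])
    fwt = np.mean([R[j - 1, j] - b[j] for j in range(1, T)])
    return bwt, fwt

R = np.array([[0.95, 0.10, 0.09],
              [0.93, 0.94, 0.11],
              [0.92, 0.93, 0.95]])          # hypothetical 3-task run
print(bwt_fwt(R, b=np.array([0.1, 0.1, 0.1])))
```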
