Two-stage Adversarial Knowledge Transfer for Edge Intelligence
Citation: QIAN Ya-Guan, MA Jun, HE Nian-Nian, WANG Bin, GU Zhao-Quan, LING Xiang, Wassim Swaileh. Two-stage Adversarial Knowledge Transfer for Edge Intelligence[J]. Journal of Software, 2022, 33(12): 4504-4516
Authors: QIAN Ya-Guan  MA Jun  HE Nian-Nian  WANG Bin  GU Zhao-Quan  LING Xiang  Wassim Swaileh
Affiliation: School of Big Data Science, Zhejiang University of Science and Technology, Hangzhou 310023, China; Network and Information Security Laboratory of Hangzhou Hikvision Digital Technology Co. Ltd., Hangzhou 310052, China; Cyberspace Institute of Advanced Technology (CIAT), Guangzhou University, Guangzhou 510006, China; College of Computer Science and Technology, Zhejiang University, Hangzhou 310058, China; CY Cergy Paris University, ETIS Research Laboratory, Paris 95032, France
Funding: Zhejiang Provincial Natural Science Foundation (LY17F020011); National Key Research and Development Program of China (2018YFB2100400); National Natural Science Foundation of China (61902082)
Abstract: The emergence of adversarial examples challenges the robustness of deep learning. With the rise of edge intelligence, how to deploy robust yet compact deep learning models on edge devices with limited computing resources remains an open problem. Since compact models cannot attain good robustness through conventional adversarial training alone, a two-stage adversarial knowledge transfer method is proposed: adversarial knowledge is first transferred from data into a model, and the adversarial knowledge acquired by the complex model is then transferred to a compact model. Adversarial knowledge is embodied either in data, in the form of adversarial examples, or in a model, in the form of its decision boundary. Specifically, GPU clusters on a cloud platform are used to adversarially train the complex model, realizing the transfer of adversarial knowledge from data to the model; an improved distillation technique then transfers this adversarial knowledge further from the complex model to the compact model, ultimately improving the robustness of the compact model deployed on the edge device. Experiments on MNIST, CIFAR-10, and CIFAR-100 show that the proposed two-stage adversarial knowledge transfer method effectively improves both the performance and the robustness of compact models while accelerating training convergence. (An illustrative sketch of the two stages follows this record.)

Keywords: adversarial examples  adversarial training  knowledge transfer  knowledge distillation
Received: 2020-03-06
Revised: 2021-03-08
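
The abstract describes the two stages only at a high level; no algorithmic details are given in this record. The sketch below is one plausible PyTorch reading of the pipeline, assuming PGD for adversarial example generation in stage 1 and a temperature-scaled KL distillation term computed on adversarial examples in stage 2. The attack choice, the temperature T, the weighting w, and all function names are illustrative assumptions, not the authors' exact "improved distillation" formulation.

    # Hypothetical sketch of the two-stage pipeline described in the abstract.
    # PGD, T, and w are assumptions; the paper's method may differ in detail.
    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, eps=8/255, step=2/255, iters=10):
        # Projected gradient descent: craft adversarial examples inside
        # an L-infinity ball of radius eps around the clean inputs.
        x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
        for _ in range(iters):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = x_adv.detach() + step * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
        return x_adv.detach()

    def train_teacher(teacher, loader, opt, device):
        # Stage 1: adversarial training of the complex (teacher) model,
        # i.e., transferring adversarial knowledge from data into the model.
        teacher.train()
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            x_adv = pgd_attack(teacher, x, y)
            opt.zero_grad()
            F.cross_entropy(teacher(x_adv), y).backward()
            opt.step()

    def distill_student(student, teacher, loader, opt, device, T=4.0, w=0.9):
        # Stage 2: distill the adversarially trained teacher into the
        # compact (student) model. Soft targets are computed on adversarial
        # examples so the teacher's robust decision boundary is transferred.
        teacher.eval()
        student.train()
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            x_adv = pgd_attack(student, x, y)  # attack the current student
            with torch.no_grad():
                t_logits = teacher(x_adv)
            s_logits = student(x_adv)
            soft = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                            F.softmax(t_logits / T, dim=1),
                            reduction="batchmean") * T * T
            hard = F.cross_entropy(s_logits, y)
            opt.zero_grad()
            (w * soft + (1 - w) * hard).backward()
            opt.step()

Computing the soft targets on adversarial examples rather than clean inputs is the natural way to expose the teacher's robust decision boundary, not just its clean-data behavior, to the student; this matches the abstract's framing of adversarial knowledge as residing in the decision boundary.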
