Fund projects: Zhejiang Provincial Natural Science Foundation (two grants); Zhejiang Provincial Public Welfare Technology Application Research Project; National Natural Science Foundation of China (three grants)
Received: 2018-08-06
Revised: 2019-01-28

Adversarial Example Generation Based on Particle Swarm Optimization
Yaguan QIAN, Hongbo LU, Shouling JI, Wujie ZHOU, Shuhui WU, Bensheng YUN, Xiangxing TAO, Jingsheng LEI. Adversarial Example Generation Based on Particle Swarm Optimization[J]. Journal of Electronics & Information Technology, 2019, 41(7): 1658-1665. doi: 10.11999/JEIT180777
Authors:Yaguan QIAN  Hongbo LU  Shouling JI  Wujie ZHOU  Shuhui WU  Bensheng YUN  Xiangxing TAO  Jingsheng LEI
Affiliation:1. School of Science/School of Big-data Science, Zhejiang University of Science and Technology, Hangzhou 310023, China;2. School of Computer Science, Zhejiang University, Hangzhou 310027, China;3. School of Electronic and Information Engineering, Zhejiang University of Science and Technology, Hangzhou 310023, China
Abstract: As machine learning is widely applied across domains, its security vulnerabilities have become prominent. A Particle Swarm Optimization (PSO) based adversarial example generation algorithm is proposed to reveal the potential security risks of the Support Vector Machine (SVM). Adversarial examples, generated by slightly perturbing legitimate samples, can mislead the SVM classifier into producing wrong classifications. Exploiting the linear separability of the SVM in its high-dimensional feature space, PSO is used to find the salient features to attack, and an averaging method then maps the perturbation back to the original input space to construct the adversarial example. This approach exploits the ease of optimization over a linear model in feature space while retaining the interpretability of perturbations in the original input space, making an otherwise intractable optimization problem feasible. Experiments on two public datasets show that adversarial examples generated with perturbations of no more than 7% consistently defeat the SVM classifier, demonstrating that the SVM has a clear security vulnerability.
Keywords: Machine learning  Support Vector Machine (SVM)  Exploratory attacks  Salient perturbation  Adversarial example
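The attack described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses a linear kernel so the PSO search runs directly in input space (the paper's feature-space search and inverse mapping back to input space are skipped), and the toy dataset, fitness function, and PSO hyperparameters are all illustrative assumptions.

```python
# Sketch (NOT the paper's exact algorithm): use particle swarm optimization
# to find a small perturbation that pushes a linear SVM's decision value
# toward the wrong class. Dataset and hyperparameters are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Train a linear SVM on a toy binary problem.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
clf = SVC(kernel="linear").fit(X, y)

def pso_attack(x, target_sign, n_particles=30, n_iters=50, eps=0.5):
    """Search for a perturbation d (||d||_inf <= eps) that moves the SVM
    decision value f(x+d) toward target_sign (the wrong class's side)."""
    dim = x.shape[0]
    pos = rng.uniform(-eps, eps, (n_particles, dim))  # candidate perturbations
    vel = np.zeros_like(pos)

    def fitness(d):
        # Lower is better: negative means the decision value has crossed
        # toward the target (wrong) class.
        return -target_sign * clf.decision_function((x + d).reshape(1, -1))[0]

    pbest = pos.copy()
    pbest_f = np.array([fitness(d) for d in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Standard PSO update: inertia + cognitive + social terms.
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, -eps, eps)  # keep the perturbation small
        f = np.array([fitness(d) for d in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest

# Attack one sample: aim the decision value at the opposite class.
x0 = X[0]
orig = clf.predict(x0.reshape(1, -1))[0]
target_sign = 1.0 if orig == 0 else -1.0  # positive decision value => class 1
delta = pso_attack(x0, target_sign)
adv = x0 + delta
print("original:", orig, "adversarial:", clf.predict(adv.reshape(1, -1))[0])
```

With a nonlinear kernel the same search would have to run in the kernel-induced feature space, which is where the paper's inverse-mapping (averaging) step becomes necessary.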
This article is indexed by Wanfang Data and other databases.