Research on Adversarial Example Generation Based on GAN
Cite this article: Sun Xiyin, Feng Huamin, Liu Biao, Zhang Jianyi. Research on Adversarial Example Generation Based on GAN[J]. Computer Applications and Software, 2019, 36(7): 202-207, 248.
Authors: Sun Xiyin  Feng Huamin  Liu Biao  Zhang Jianyi
Affiliation: Xidian University, Xi'an 710071, Shaanxi, China; Beijing Electronic Science and Technology Institute, Beijing 100070, China
Abstract: Deep convolutional neural networks have achieved good performance on tasks such as image classification, object detection and face recognition, but they are prone to misclassification under adversarial attacks. To improve the security of convolutional neural networks, and targeting the problem of targeted adversarial attacks in image classification, a GAN-based adversarial example generation method is proposed. Using a class-probability-vector re-ordering function together with a generative adversarial network, the method mounts an adversarial attack on a neural network whose internal structure is unknown. Experimental results show that, with the perturbation of each sample kept within 5%, the proposed method raises the average success rate of targeted adversarial attacks by 1.5% over the Adversarial Transformation Network and reduces the average time needed to generate an adversarial example by 20%.

Keywords: Adversarial examples  Generative adversarial network  Deep learning  Classification model

ADVERSARIAL EXAMPLES GENERATION BASED ON GAN
Sun Xiyin, Feng Huamin, Liu Biao, Zhang Jianyi. ADVERSARIAL EXAMPLES GENERATION BASED ON GAN[J]. Computer Applications and Software, 2019, 36(7): 202-207, 248.
Authors:Sun Xiyin  Feng Huamin  Liu Biao  Zhang Jianyi
Affiliation: (Xidian University, Xi'an 710071, Shaanxi, China; Beijing Electronic Science and Technology Institute, Beijing 100070, China)
Abstract: Deep convolutional neural networks have achieved good performance in image classification, object detection and face recognition. At the same time, studies have found that deep convolutional neural networks are prone to misclassification when facing adversarial attacks. In order to improve the security of convolutional neural networks, and aiming at the problem of targeted adversarial attacks in image classification, we propose an adversarial example generation method based on GAN. Using a class probability vector re-ordering function and a GAN, the adversarial attack is carried out on the premise that the internal structure of the neural network under attack is unknown. The experimental results show that, when the perturbation to each sample is no more than 5%, the method improves the average success rate of targeted adversarial attacks by 1.5% over the Adversarial Transformation Network and reduces the average time required to generate adversarial examples by 20%.
Keywords:Adversarial examples  GAN  Deep learning  Classification model
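To make the pipeline described in the abstract concrete, the following is a minimal PyTorch sketch of a GAN-based, black-box targeted attack of the kind outlined above. The network sizes, the substitute-model distillation step used here in place of the authors' class-probability-vector re-ordering function, and all hyper-parameter names (eps, lambda_adv) are illustrative assumptions rather than the paper's exact design; only the overall setup (a generator that produces a bounded perturbation, a discriminator that keeps the perturbed image natural-looking, and a victim model queried only for its class probabilities) follows the abstract.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Maps a clean image to a perturbed image with ||delta||_inf <= eps."""
    def __init__(self, channels=1, eps=0.05):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        # Tanh keeps the raw perturbation in [-1, 1]; eps bounds it to about 5% of the pixel range.
        return torch.clamp(x + self.eps * self.net(x), 0.0, 1.0)

class Discriminator(nn.Module):
    """Scores whether an image looks like an unmodified sample."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
        )
    def forward(self, x):
        return self.net(x)

def train_step(G, D, substitute, black_box, x, target_class,
               opt_G, opt_D, opt_S, lambda_adv=10.0):
    """One training step. `black_box` is the victim classifier and is only queried
    for class probabilities; gradients for the targeted loss flow through the
    locally trained `substitute` network instead."""
    ones = torch.ones(x.size(0), 1, device=x.device)
    zeros = torch.zeros(x.size(0), 1, device=x.device)
    x_adv = G(x)

    # 1) Distill the black-box probability output into the substitute model.
    with torch.no_grad():
        p_bb = black_box(x_adv)                                  # black-box query only
    opt_S.zero_grad()
    s_loss = F.kl_div(F.log_softmax(substitute(x_adv.detach()), dim=1),
                      p_bb, reduction="batchmean")
    s_loss.backward()
    opt_S.step()

    # 2) Discriminator: clean images are "real", perturbed images are "fake".
    opt_D.zero_grad()
    d_loss = (F.binary_cross_entropy_with_logits(D(x), ones)
              + F.binary_cross_entropy_with_logits(D(x_adv.detach()), zeros))
    d_loss.backward()
    opt_D.step()

    # 3) Generator: stay visually plausible (fool D) and drive the substitute's
    #    prediction toward the chosen target class (targeted attack).
    opt_G.zero_grad()
    tgt = torch.full((x.size(0),), target_class, dtype=torch.long, device=x.device)
    g_loss = (F.binary_cross_entropy_with_logits(D(x_adv), ones)
              + lambda_adv * F.cross_entropy(substitute(x_adv), tgt))
    g_loss.backward()
    opt_G.step()
    return g_loss.item(), d_loss.item(), s_loss.item()

Because a black-box model exposes no gradients, the sketch distills its probability output into a local substitute and backpropagates the targeted loss through that substitute; the paper's re-ordering of the class probability vector presumably serves a similar purpose, but its exact form is not recoverable from the abstract alone.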
This article has been indexed in databases such as VIP (Weipu) and Wanfang Data.