Semantic Adversarial Examples Generation Method for Color Model Disturbances (颜色模型扰动的语义对抗样本生成方法)

Cite this article: WANG Shuya, LIU Qiangchun, CHEN Yunfang, WANG Fujun. Semantic Adversarial Examples Generation Method for Color Model Disturbances[J]. Computer Engineering and Applications, 2021, 57(15): 163-170. DOI: 10.3778/j.issn.1002-8331.2003-0137
Authors: WANG Shuya (王舒雅), LIU Qiangchun (刘强春), CHEN Yunfang (陈云芳), WANG Fujun (王福俊)
Affiliation: 1. College of Tongda, Nanjing University of Posts and Telecommunications, Yangzhou, Jiangsu 225127, China; 2. College of Computer, Nanjing University of Posts and Telecommunications, Nanjing 210023, China; 3. College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
Abstract: Convolutional neural networks are deep neural networks with powerful feature-extraction capabilities and have been widely applied in many fields. However, research shows that they are vulnerable to adversarial-example attacks. Unlike traditional methods that generate adversarial perturbations by gradient iteration, this paper proposes a color-model-based method for generating semantic adversarial examples: it exploits the shape preference that both human vision and convolutional models exhibit when recognizing objects, and produces adversarial examples by perturbing the color channels under a color-model transformation. Generating an example requires none of the target model's network parameters, loss function, or structural information; it relies only on the color-model transformation and random perturbation of channel information, so the resulting adversarial examples can mount black-box attacks.
Keywords: adversarial examples; convolutional neural network; semantic feature; black-box attack
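
The abstract describes the attack only at a high level: transform the image into another color model, randomly perturb its channel information, and query the target classifier as a black box. As a minimal sketch of that idea, assuming an RGB-to-HSV transform, uniform random hue and saturation perturbations, and a hypothetical `predict` callable standing in for the black-box model (none of these specifics are given in the record above), the query loop might look like this:

# Illustrative sketch only -- not the authors' published implementation.
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def random_color_perturbation(image_rgb, rng, hue_range=0.5, sat_range=0.3):
    """Randomly shift hue and rescale saturation of an (H, W, 3) RGB image with values in [0, 1]."""
    hsv = rgb_to_hsv(image_rgb)                                   # color-model transformation: RGB -> HSV
    hsv[..., 0] = (hsv[..., 0] + rng.uniform(-hue_range, hue_range)) % 1.0   # hue shift (wraps around)
    hsv[..., 1] = np.clip(hsv[..., 1] * rng.uniform(1.0 - sat_range, 1.0 + sat_range), 0.0, 1.0)  # saturation scale
    return hsv_to_rgb(hsv)                                        # back to RGB; spatial/shape content untouched

def black_box_semantic_attack(image_rgb, true_label, predict, max_queries=100, seed=0):
    """Resample random color perturbations until the black-box classifier `predict` is fooled."""
    rng = np.random.default_rng(seed)
    for _ in range(max_queries):
        candidate = random_color_perturbation(image_rgb, rng)
        if predict(candidate) != true_label:                      # only model outputs are consumed
            return candidate                                      # semantic adversarial example found
    return None                                                   # no success within the query budget

Only the outputs of `predict` are used, which matches the black-box setting described in the abstract, and only color channels change while shape content is preserved, so the perturbed image remains semantically recognizable to a human.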
This article is indexed in Wanfang Data (万方数据) and other databases.
The original abstract and a free full-text PDF are available from Computer Engineering and Applications (《计算机工程与应用》).