
Adversarial Color Sticker: An Adversarial Attack on DNNs
Authors: Hu Chengyin  Chen Xiaoqian  Shi Weiwen  Tian Ling
Affiliations: University of Electronic Science and Technology of China; National Innovation Institute of Defense Technology, Academy of Military Sciences; University of Electronic Science and Technology of China; University of Electronic Science and Technology of China
Funding: Fundamental Research Funds for the Central Universities (ZYGX2020ZB034, ZYGX2021J019)
Abstract: Deep neural networks (DNNs) have shown state-of-the-art performance in computer vision applications such as image classification, segmentation, and object detection. However, recent work has shown that DNNs are vulnerable to artificial digital perturbations of their input data, known as adversarial attacks. The classification accuracy of a DNN is strongly affected by the data distribution of its training set, and distorting or perturbing the color space of input images produces out-of-distribution data that the network is more likely to misclassify. This paper proposes a simple and efficient attack method, the adversarial color sticker (AdvCS), which uses particle swarm optimization to tune the physical parameters of a colored sticker and mount effective attacks in the physical world. First, a dataset of background-color variations is built by altering the RGB channels of a subset of ImageNet with 27 different combinations, in order to study how color variation affects DNN performance. Experiments with several state-of-the-art DNN architectures on this dataset show a significant correlation between color variation and loss of classification accuracy. In addition, experiments on the proposed dataset with the ResNet-50 architecture demonstrate the performance of recently proposed robust training techniques and strategies (such as AugMix, Revisiting, and Normalizer-Free); the results indicate that these techniques improve the robustness of DNNs to color variation. Finally, a colored translucent sticker is used as the physical perturbation: its physical parameters are optimized with particle swarm optimization and the sticker is placed on the camera to carry out physical attacks. The experimental results verify the effectiveness of the proposed method.
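The abstract describes building the color-variation dataset by altering the RGB channels of ImageNet images with 27 different combinations, which suggests three levels per channel (3³ = 27). The paper's exact transform is not reproduced here; the following is a minimal sketch under that assumption, with the per-channel scale levels chosen purely for illustration:

```python
from itertools import product

import numpy as np

def color_variants(img, levels=(0.5, 1.0, 1.5)):
    """Return 27 color-shifted copies of an H x W x 3 image in [0, 1].

    Each variant scales the R, G, and B channels by one of `levels`
    (3 levels per channel -> 3**3 = 27 combinations). The levels here
    are illustrative assumptions, not the values used in the paper.
    """
    variants = []
    for r, g, b in product(levels, repeat=3):
        scaled = img * np.array([r, g, b], dtype=img.dtype)
        variants.append(np.clip(scaled, 0.0, 1.0))
    return variants

img = np.random.rand(32, 32, 3).astype(np.float32)
variants = color_variants(img)
print(len(variants))  # 27
```

Running every image of the chosen ImageNet subset through such a transform yields the 27-fold color-variation dataset on which the classifiers' accuracy drop can be measured.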

Keywords: deep neural networks  computer vision  perturbation  classification accuracy  adversarial attack  robustness
Received: 2023-02-28
Revised: 2023-03-31
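The physical attack tunes the sticker's physical parameters with particle swarm optimization (PSO). The paper's parameterization and attack objective are not given here; the sketch below is a generic box-bounded PSO minimizer with textbook hyperparameters, and a toy quadratic objective standing in for the black-box attack loss:

```python
import random

def pso(objective, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimize `objective` over the box `bounds` with a basic particle swarm.

    Hyperparameters (inertia w, cognitive c1, social c2) are common
    textbook defaults, not the values used in the paper.
    """
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # each particle's best position
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for the black-box attack loss: a quadratic bowl.
best, best_val = pso(lambda p: sum(x * x for x in p), bounds=[(-5.0, 5.0)] * 3)
```

In the attack setting, `objective(p)` would instead render a translucent sticker with parameters `p` (for example its position and transparency, which are assumptions of this sketch) onto the camera view and return the classifier's confidence in the true class, so that minimizing it drives misclassification.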
