
Funding: This work was supported by the Science and Technology Innovation 2030 "New Generation Artificial Intelligence" Major Project (No. 2020AAA0140000).

Received: 16 January 2020; Revised: 13 March 2020

Defense Against Physical Attacks on Object Detection Based on Entropy and Random Erasing
Authors:GAO Hongchao  ZHOU Guangzhi  DAI Jiao  LI Zhaoxing  HAN Jizhong
Affiliation: Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100093, China; School of Cyber Security, University of Chinese Academy of Sciences, Beijing 100049, China
Abstract: Existing physical attacks mislead the inference of deep neural networks (DNNs) by adding perturbed adversarial patches to the attacked image, rendering DNN-based applications ineffective and thereby achieving the attacker's goal. Such attacks are easy to implement and highly transferable, posing serious challenges to the security of DNNs. The amount of information contained in the adversarial patches generated by existing physical attack methods is usually higher than that of patches from real natural-scene images. Exploiting this observation, this paper proposes a defense algorithm with broad applicability and a clear defensive effect. The algorithm consists of an Entropy-based Detection Component (EDC) and a Random Erasing Component (REC). The EDC uses an entropy measure to detect perturbed adversarial patches and replaces them with gray patches; it significantly reduces the impact of adversarial patches on model inference and does not rely on large-scale training data. The REC improves the general training paradigm of DNNs: a model trained with REC not only defends effectively against existing physical attacks but also significantly improves image analysis performance, all without changing the network structure. Both components are highly transferable and require no additional training data, and their organic combination yields an efficient and transferable defense algorithm.
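The abstract does not give the EDC's exact implementation; the following is a minimal sketch of the underlying idea — sliding-window Shannon entropy flags information-dense regions, which are then grayed out. The window size, stride, threshold, and function names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def local_entropy(patch, bins=256):
    """Shannon entropy (bits) of the gray-level histogram of an image patch."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256), density=True)
    p = hist[hist > 0]
    return float(-np.sum(p * np.log2(p)))

def suppress_high_entropy_patches(image, window=32, stride=32,
                                  threshold=7.0, gray=128):
    """Replace sliding windows whose entropy exceeds `threshold` with gray.

    Adversarial patches tend to be information-dense, so their local
    entropy is higher than that of natural image regions.
    """
    out = image.copy()
    h, w = image.shape[:2]
    for y in range(0, h - window + 1, stride):
        for x in range(0, w - window + 1, stride):
            if local_entropy(image[y:y + window, x:x + window]) > threshold:
                out[y:y + window, x:x + window] = gray
    return out
```

A flat image region scores an entropy of 0 bits, while a noise-like (patch-like) region approaches the 8-bit maximum, so a fixed threshold separates the two in this toy setting; the paper's actual detection criterion may differ.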
Experimental results on different datasets for two image analysis tasks show that the proposed defense algorithm not only effectively defends against physical attacks on object detection (the mean average precision (mAP) is increased from 31.3% to 64.0% on Pascal VOC 2007 and from 19.0% to 41.0% on the Inria dataset), but also exhibits good transferability, defending against physical attacks on both image classification and object detection tasks.
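The REC builds on the Random Erasing augmentation idea (randomly occluding a rectangle during training so the model becomes robust to patch-like occlusions). As a rough illustration only — the parameter ranges, fill strategy, and function name below are assumptions, not the paper's specification:

```python
import numpy as np

def random_erase(image, rng, p=0.5, area_range=(0.02, 0.2),
                 aspect_range=(0.3, 3.3)):
    """With probability `p`, erase a random rectangle of the image,
    filling it with uniform noise (a common Random Erasing variant)."""
    out = image.copy()
    if rng.random() > p:
        return out
    h, w = image.shape[:2]
    for _ in range(10):  # retry until the sampled rectangle fits
        area = rng.uniform(*area_range) * h * w
        aspect = rng.uniform(*aspect_range)
        eh = int(round(np.sqrt(area * aspect)))
        ew = int(round(np.sqrt(area / aspect)))
        if 0 < eh < h and 0 < ew < w:
            y = rng.integers(0, h - eh)
            x = rng.integers(0, w - ew)
            out[y:y + eh, x:x + ew] = rng.integers(
                0, 256, size=(eh, ew) + image.shape[2:])
            return out
    return out
```

Applied to each training sample, this exposes the model to occlusions resembling adversarial patches, which is the intuition the abstract attributes to REC; the paper's actual training-time modification may differ in detail.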
Keywords:adversarial examples  physical attacks  adversarial patch  adversarial defense  object detection
