Review of Adversarial Patches for Image Classification
Cite this article: YANG Jie. Review of Adversarial Patches for Image Classification[J]. Mobile Information (New Networks), 2023, 45(1): 144-146.
Author: YANG Jie
Affiliation: Guangzhou University, Guangzhou 510006, China
Abstract: In recent years, the security and robustness of deep neural network models have become issues of wide concern. Conventional adversarial example attacks, long the focus of study, are difficult to carry out in the physical world, which prompted the emergence of adversarial patch attacks. This paper surveys typical adversarial patch attacks and defense methods for image classification models and analyzes their characteristics. Finally, it summarizes the main challenges that remain in adversarial patch research and offers an outlook on future research directions.
Keywords: image classification; deep learning; adversarial patch; adversarial patch defense
Received: 2022-10-11
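The attack idea surveyed in the abstract — optimizing only the pixels inside a small, localized region so that the model's prediction flips, rather than perturbing the whole image — can be illustrated with a minimal toy sketch. The linear "classifier", the weights, and all function names below are illustrative assumptions for exposition, not methods from the paper:

```python
# Toy sketch of an adversarial patch attack on a linear scorer.
# A real attack would use gradient ascent on a deep network's loss;
# here the "model" is linear, so the gradient of the score with
# respect to a patch pixel is simply the weight at that location.

def predict(image, weights):
    """Linear scorer: class 1 if the weighted pixel sum is positive."""
    score = sum(w * p
                for row_w, row_p in zip(weights, image)
                for w, p in zip(row_w, row_p))
    return 1 if score > 0 else 0

def apply_patch(image, patch, top, left):
    """Overwrite a square region with the patch, clamped to valid
    pixel values [0, 1] (a physical patch replaces scene content)."""
    out = [row[:] for row in image]
    for i, prow in enumerate(patch):
        for j, v in enumerate(prow):
            out[top + i][left + j] = max(0.0, min(1.0, v))
    return out

def train_patch(image, weights, top, left, size, steps=200, lr=0.5):
    """Gradient ascent on the score, restricted to the patch region."""
    patch = [[0.5] * size for _ in range(size)]
    for _ in range(steps):
        for i in range(size):
            for j in range(size):
                patch[i][j] += lr * weights[top + i][left + j]
    return patch

# Hypothetical setup: an 8x8 "image" and weights chosen so the clean
# image is class 0 but the patch region can push the score positive.
weights = [[0.5 if i < 2 and j < 2 else -0.05 for j in range(8)]
           for i in range(8)]
image = [[0.2] * 8 for _ in range(8)]

clean_label = predict(image, weights)                      # class 0
patch = train_patch(image, weights, top=0, left=0, size=2)
patched_label = predict(apply_patch(image, patch, 0, 0), weights)  # class 1
```

Even in this stripped-down setting the defining property of the attack is visible: only a 2x2 region is modified, yet the decision flips, which is what makes patch attacks printable and deployable in the physical world.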