Research on the construction and defense of adversarial examples in deep learning
Authors: DUAN Guanghan  MA Chunguang  SONG Lei  WU Peng
Affiliations: 1. College of Computer Science and Technology, Harbin Engineering University, Harbin, Heilongjiang 150001, China; 2. College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao, Shandong 266590, China
Funding: National Natural Science Foundation of China (No.61472097, No.61932005, No.U1936112); Natural Science Foundation of Heilongjiang Province (No.JJ2019LH1770).
Abstract: As deep learning technology advances further in fields such as computer vision, network security, and natural language processing, it has gradually exposed certain security risks. Existing deep learning algorithms cannot effectively capture the essential characteristics of data, so they may fail to produce correct results when faced with malicious inputs. Starting from the security threats that deep learning currently faces, this paper introduces the adversarial example problem in deep learning, surveys existing explanations for the existence of adversarial examples, reviews and categorizes classic adversarial example construction methods, briefly describes recent application instances of adversarial examples in different scenarios, and compares several adversarial example defense techniques; finally, it summarizes the open problems in adversarial example research and looks ahead to the development trends of this field.

Keywords: adversarial example  deep learning  security threat  defense technology

Research on structure and defense of adversarial example in deep learning
Authors: DUAN Guanghan  MA Chunguang  SONG Lei  WU Peng
Affiliation: 1. College of Computer Science and Technology, Harbin Engineering University, Harbin 150001, China; 2. College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China
Abstract: With the further application of deep learning technology in the fields of computer vision, network security and natural language processing, certain security risks have gradually been exposed. Existing deep learning algorithms cannot effectively describe the essential characteristics of data or their inherent causal relationships, so they often fail to give correct results when facing malicious input. Starting from the current security threats to deep learning, the adversarial example problem and its characteristics in deep learning applications were introduced, hypotheses on the existence of adversarial examples were summarized, classic adversarial example construction methods were reviewed, the recent research status in different scenarios was surveyed, several defense techniques applied at different stages were compared, and finally the development trend of adversarial example research was forecast.
Keywords: adversarial example  deep learning  security threat  defense technology
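
As an illustrative aside (not taken from the paper itself), the sketch below shows one of the classic construction methods this survey reviews, the fast gradient sign method (FGSM), assuming a PyTorch image classifier; model, x, y, and epsilon are hypothetical placeholders.

```python
# Minimal FGSM sketch (illustration only; assumes a PyTorch image classifier).
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft adversarial inputs from x with a single signed-gradient step."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)    # classification loss w.r.t. the true labels
    loss.backward()                                  # gradient of the loss w.r.t. the input
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()  # step in the gradient-sign direction
        x_adv = x_adv.clamp(0.0, 1.0)                # keep pixel values in a valid range
    return x_adv.detach()
```

A defense such as adversarial training, one of the technique families compared in the survey, would then mix such fgsm_attack outputs into each training batch.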