
The Research Progress and Trend of Adversarial Examples for Artificial Intelligence Systems
Authors: REN Kui, WANG Qian
Affiliations: Cyberspace Security Research Center, Zhejiang University; School of Cyber Science and Engineering, Wuhan University
Funding: National Natural Science Foundation of China (61772236, 61822207)
Abstract: With the rapid development and wide application of deep-learning-based artificial intelligence techniques, concerns about their security have grown markedly. In particular, a recent line of research has demonstrated that AI systems built on deep learning models are vulnerable to adversarial examples: by adding carefully crafted perturbations that are imperceptible to humans, an attacker can cause a well-trained deep learning model to make severe misjudgments on otherwise legitimate inputs. This paper surveys recent advances in two classes of attacks on AI systems, adversarial images and adversarial audio, provides a taxonomy and comprehensive comparison of the existing work, and concludes with a discussion of open challenges and future research directions.
Keywords: adversarial examples; artificial intelligence systems; deep learning
Received: 2019-03-13
Revised: 2019-03-13
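To make the mechanism described in the abstract concrete, below is a minimal sketch of one canonical way to craft an image adversarial example, the fast gradient sign method (FGSM). This is an illustrative assumption added here, not an algorithm taken from the paper itself; the model, inputs, and epsilon value are placeholders.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    # Illustrative FGSM sketch; model, x, y, and epsilon are placeholders.
    # x: batch of input images with pixel values in [0, 1]; y: true labels.
    # epsilon bounds the L-infinity norm of the perturbation so that it
    # stays small enough to be hard for a human observer to notice.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Take one step in the gradient-sign direction that increases the
    # classification loss, then clamp back to the valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

For a typical image classifier, an epsilon of only a few gray levels (e.g., 8/255) is often enough to flip the predicted class while the perturbed image looks unchanged to a human, which is precisely the vulnerability the survey examines.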
