Adaptive Fast and Targeted Adversarial Attack against ASR Systems
Cite this article: ZHANG Shudong, GAO Haichang, CAO Xiwen, KANG Shuai. Adaptive fast and targeted adversarial attack against ASR systems [J]. Journal of Xidian University, 2021, 48(1): 168-175.
Authors: ZHANG Shudong  GAO Haichang  CAO Xiwen  KANG Shuai
Abstract: Adversarial examples are malicious inputs that mislead a deep learning model into producing erroneous outputs by adding small perturbations, imperceptible to the human eye, to the input. In recent years, alongside the large body of work on adversarial examples in the image domain, new progress has also begun in automatic speech recognition. At present, the state-of-the-art adversarial attack against automatic speech recognition systems comes from Carlini & Wagner, whose method obtains the minimal perturbation that causes the model to be misclassified...

Keywords: deep neural networks  adversarial examples  speech recognition  machine learning  adversarial attacks
Received: 2020-08-14

Adaptive fast and targeted adversarial attack for speech recognition
ZHANG Shudong, GAO Haichang, CAO Xiwen, KANG Shuai. Adaptive fast and targeted adversarial attack for speech recognition [J]. Journal of Xidian University, 2021, 48(1): 168-175.
Authors:ZHANG Shudong  GAO Haichang  CAO Xiwen  KANG Shuai
Affiliation:School of Computer Science and Technology,Xidian University,Xi’an 710071,China
Abstract: Adversarial examples are malicious inputs designed to induce deep learning models to produce erroneous outputs; they are made imperceptible to humans by adding small perturbations to the input. Most research on adversarial examples has been in the image domain. Recently, studies on adversarial examples have expanded into the automatic speech recognition (ASR) domain. The state-of-the-art attack on ASR systems comes from Carlini & Wagner (C&W), which aims to obtain the minimum perturbation that causes the model to be misclassified. However, this method is inefficient, since it requires optimizing two loss terms at the same time and usually takes thousands of iterations. In this paper, we propose an efficient approach based on strategies that maximize the likelihood of the adversarial examples for the target categories. A large number of experiments show that our attack achieves better results with fewer iterations.
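The likelihood-maximization idea can be illustrated with a minimal toy sketch. This is not the paper's method: a tiny linear-softmax "frame classifier" stands in for a real ASR model, and the weights `W`, budget `eps`, step size, and iteration count are all hypothetical. The attack simply ascends the log-likelihood of the target class under an L-infinity perturbation budget, rather than jointly minimizing perturbation size and a misclassification loss as in C&W:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for an ASR frame classifier: 16 audio features -> 4 classes.
W = rng.normal(size=(16, 4))

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def target_log_likelihood(x, target):
    # log p(target | x) under the toy linear-softmax model
    return np.log(softmax(x @ W)[target] + 1e-12)

def grad_target_ll(x, target):
    # Analytic gradient of log softmax(x @ W)[target] w.r.t. x:
    # d/dx = W[:, target] - W @ softmax(x @ W)
    p = softmax(x @ W)
    return W[:, target] - W @ p

def targeted_attack(x, target, eps=1.0, step=0.05, iters=300):
    """Maximize the likelihood of `target` under an L-inf budget `eps`."""
    x_adv = x.copy()
    for _ in range(iters):
        g = grad_target_ll(x_adv, target)
        x_adv = x_adv + step * np.sign(g)          # gradient-sign ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)   # keep the perturbation small
    return x_adv

x = rng.normal(size=16)   # one "audio frame"
target = 2
x_adv = targeted_attack(x, target)
print(target_log_likelihood(x, target), "->", target_log_likelihood(x_adv, target))
```

Because only one objective (the target-class likelihood) is optimized per step, each iteration is cheap; the perturbation budget is enforced by clipping instead of by a second loss term.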
Keywords: deep neural networks (DNN)  adversarial examples  speech recognition  machine learning  adversarial attacks