Research on adversarial image perturbation algorithm for personal information protection
Citation: Wang Tao, Ma Chuan, Chen Shuping. Research on adversarial image perturbation algorithm for personal information protection [J]. Application Research of Computers, 2021, 38(8): 2543-2548, 2555.
Authors: Wang Tao  Ma Chuan  Chen Shuping
Affiliation: School of Business Administration, Hebei Normal University of Science & Technology, Qinhuangdao, Hebei 066004, China; School of Information Science and Engineering, Yanshan University, Qinhuangdao, Hebei 066004, China; Library, Yanshan University, Qinhuangdao, Hebei 066004, China
Fund: Social Science Foundation of Hebei Province (HB18SH012)
Abstract: To protect personal information in images against deep neural networks that can mine and expose it, this paper studies an adversarial image perturbation algorithm. The algorithm formulates adversarial example generation as a constrained multi-objective optimization problem, taking the classification confidence of the neural network, the positions of the perturbed pixels, and the color difference as objectives, and obtains adversarial examples iteratively with a differential evolution algorithm. Adversarial example generation experiments were conducted on the MNIST and CIFAR-10 datasets with the LeNet and ResNet deep neural networks, and the results were compared and analyzed in terms of attack success rate, number of perturbed pixels, optimization effect, and the spatial characteristics of the adversarial examples. The results show that the algorithm can still reliably fool the deep neural networks while perturbing very few pixels (five per image on average), and that it significantly optimizes the positions and color differences of the perturbed pixels, so that personal information is protected without visibly damaging the original image. The study helps to balance the sharing of information-technology dividends with the protection of personal information, and provides technical support for research on adversarial example generation and on the classification space characteristics of deep neural networks.

Keywords: deep learning  neural network  adversarial image perturbation  sparse adversarial attack  personal information protection
Received: 2020-12-09
Revised: 2021-07-07
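
The abstract describes the method only at a high level. As a rough illustration, here is a minimal Python sketch of a differential-evolution search over a handful of (x, y, R, G, B) pixel tuples, in the spirit of the sparse attack outlined above; it is not the authors' implementation. The classifier interface predict_confidence, the image size, the pixel budget N_PIXELS, and the exact form and weights (ALPHA, BETA) of the position and color-difference objectives are assumptions made for illustration.

    # Minimal sketch of a differential-evolution-based sparse adversarial attack.
    # Assumptions (not from the paper): predict_confidence(image, label) returns the
    # classifier's confidence for `label`, images are HxWx3 uint8 arrays, and the
    # position/color objectives are folded into one weighted scalar fitness.
    import numpy as np
    from scipy.optimize import differential_evolution

    H, W = 32, 32           # CIFAR-10-sized image (assumption)
    N_PIXELS = 5            # matches the average perturbation budget reported above
    ALPHA, BETA = 0.1, 0.1  # hypothetical weights for the position and color terms

    def apply_perturbation(image, params):
        """Write N_PIXELS perturbed pixels, each encoded as (x, y, R, G, B), into a copy."""
        adv = image.copy()
        for x, y, r, g, b in params.reshape(N_PIXELS, 5):
            adv[int(y), int(x)] = (int(r), int(g), int(b))
        return adv

    def fitness(params, image, true_label, predict_confidence):
        """Lower is better: true-class confidence plus position and color penalties."""
        pixels = params.reshape(N_PIXELS, 5)
        adv = apply_perturbation(image, params)
        conf = predict_confidence(adv, true_label)          # objective 1: confidence
        # objective 2 (assumed form): discourage perturbations near the image centre
        centre_dist = np.abs(pixels[:, :2] - [W / 2, H / 2]).sum(axis=1)
        position_penalty = 1.0 - centre_dist.mean() / (W / 2 + H / 2)
        # objective 3 (assumed form): small color difference at the perturbed pixels
        orig_rgb = np.array([image[int(y), int(x)] for x, y, *_ in pixels], dtype=float)
        color_penalty = np.abs(pixels[:, 2:] - orig_rgb).mean() / 255.0
        return conf + ALPHA * position_penalty + BETA * color_penalty

    def attack(image, true_label, predict_confidence):
        """Evolve the pixel tuples with SciPy's differential evolution, return the adversarial image."""
        bounds = [(0, W - 1), (0, H - 1), (0, 255), (0, 255), (0, 255)] * N_PIXELS
        result = differential_evolution(
            fitness, bounds, args=(image, true_label, predict_confidence),
            maxiter=100, popsize=20, recombination=0.7, polish=False, seed=0)
        return apply_perturbation(image, result.x)

Encoding only a few pixel tuples keeps the search space small, which is what makes a black-box evolutionary search of this kind practical: the classifier is queried only through predict_confidence, with no gradient access.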
