Gradient-structure-based Adversarial Attacks on Graph Neural Network
Citation: LI Ning-Shu, GUAN Dong-Hai, YUAN Wei-Wei. Gradient-structure-based Adversarial Attacks on Graph Neural Network[J]. Computer Systems & Applications, 2023, 32(7): 276-283.
Authors: LI Ning-Shu  GUAN Dong-Hai  YUAN Wei-Wei
Affiliation: College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
Funding: National Defense Basic Scientific Research Program (JCKY2020204C009)
Abstract: Graph neural networks (GNNs) have achieved remarkable performance on semi-supervised node classification. Prior work shows that GNNs are susceptible to perturbations, which has motivated research on their adversarial robustness. Purely gradient-based attacks, however, cannot guarantee optimal perturbations. This paper therefore proposes an adversarial attack based on both gradient and structural information to strengthen gradient-based perturbations: it first generates a candidate perturbation set via first-order optimization of the training loss, then evaluates the structural similarity of the candidates, ranks them by the evaluation results, and selects a fixed budget of modifications to carry out the attack. The proposed attack is evaluated on semi-supervised node classification over five datasets. Experimental results show that with only a small number of perturbations, node classification accuracy drops significantly, and the proposed method clearly outperforms existing attack methods.
Keywords: graph neural network (GNN)  node classification  adversarial attacks  gradient attacks
Received: 2022-12-17
Revised: 2023-02-03

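The abstract outlines a three-step pipeline: gradient-based candidate generation, structural similarity scoring, and selection of a fixed budget of edge modifications. The following is a minimal, self-contained sketch of that general idea, not the paper's implementation: the two-layer GCN surrogate, its training hyperparameters, the grad * (1 - 2A) flip score, the Jaccard-similarity re-ranking rule, and all names (normalize_adj, gcn_forward, jaccard, attack, budget, n_candidates) are illustrative assumptions.

# Minimal illustrative sketch (PyTorch); NOT the paper's implementation.
# A gradient-plus-structure edge-flip attack on a 2-layer GCN surrogate:
# (1) candidate flips from the gradient of the training loss w.r.t. the
#     adjacency matrix, (2) re-ranking by an assumed structural score
#     (Jaccard similarity of endpoint neighborhoods), (3) flip a fixed budget.
import torch
import torch.nn.functional as F

def normalize_adj(A):
    # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}.
    A_hat = A + torch.eye(A.size(0))
    d_inv_sqrt = A_hat.sum(1).pow(-0.5)
    return d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]

def gcn_forward(A, X, W1, W2):
    # Two-layer GCN surrogate: logits = Â ReLU(Â X W1) W2.
    A_norm = normalize_adj(A)
    return A_norm @ torch.relu(A_norm @ X @ W1) @ W2

def jaccard(A, i, j):
    # Neighborhood overlap of nodes i and j (assumed structural similarity measure).
    ni, nj = A[i] > 0, A[j] > 0
    union = (ni | nj).sum().item()
    return (ni & nj).sum().item() / union if union else 0.0

def attack(A, X, y, train_mask, budget, n_candidates=200):
    n, f = X.shape
    c = int(y.max()) + 1
    # 1) Train a small GCN surrogate on the clean graph.
    W1 = (0.1 * torch.randn(f, 16)).requires_grad_()
    W2 = (0.1 * torch.randn(16, c)).requires_grad_()
    opt = torch.optim.Adam([W1, W2], lr=0.01)
    for _ in range(200):
        opt.zero_grad()
        loss = F.cross_entropy(gcn_forward(A, X, W1, W2)[train_mask], y[train_mask])
        loss.backward()
        opt.step()

    # 2) First-order candidate generation: gradient of the training loss w.r.t.
    #    the adjacency matrix; flipping entry (i, j) changes it by (1 - 2*A_ij),
    #    so grad * (1 - 2*A) estimates the resulting loss increase.
    A_var = A.clone().requires_grad_()
    loss = F.cross_entropy(gcn_forward(A_var, X, W1, W2)[train_mask], y[train_mask])
    grad = torch.autograd.grad(loss, A_var)[0]
    grad = 0.5 * (grad + grad.t())                      # keep the graph undirected
    gain = grad * (1 - 2 * A)
    triu = torch.triu(torch.ones_like(A), diagonal=1).bool()
    pairs = torch.nonzero(triu)                         # candidate (i, j) pairs
    vals = gain[triu]
    top = torch.topk(vals, min(n_candidates, vals.numel())).indices
    cand = [(int(pairs[t, 0]), int(pairs[t, 1]), float(vals[t])) for t in top]

    # 3) Structural re-ranking (assumed heuristic): prefer flips between
    #    structurally dissimilar endpoints, then spend the fixed budget.
    cand.sort(key=lambda t: t[2] * (1.0 - jaccard(A, t[0], t[1])), reverse=True)
    A_pert = A.clone()
    for i, j, _ in cand[:budget]:
        A_pert[i, j] = A_pert[j, i] = 1 - A_pert[i, j]
    return A_pert.detach()

In an evaluation of the kind described in the abstract, one would retrain or re-evaluate a GNN on the perturbed adjacency matrix returned above and compare its node classification accuracy against the clean graph across the datasets.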