Robust Regularization Design of Graph Neural Networks Against Adversarial Attacks Based on Lyapunov Theory
Affiliation: 1. School of Artificial Intelligence, Hebei University of Technology, Tianjin 300401, China; 2. School of Computer Science and Engineering, North China Institute of Aerospace Engineering, Langfang 065000, China
Abstract: The robustness of graph neural networks (GNNs) is a critical research topic in deep learning. Many researchers have designed regularization methods to enhance the robustness of neural networks, but theoretical analysis of the underlying principle of robustness is lacking. To address this weakness of current robustness design methods, this paper gives new insights into how the robustness of GNNs can be guaranteed. A novel regularization strategy named Lya-Reg is designed to guarantee the robustness of GNNs via Lyapunov theory. Our results offer new insights into how regularization can mitigate various adversarial effects on different graph signals. Extensive experiments on various public datasets demonstrate that the proposed regularization method is more robust against various types of graph adversarial attacks than state-of-the-art methods such as the L1-norm, L2-norm, L21-norm, Pro-GNN, PA-GNN, and GARNET.
Keywords: Deep learning; Graph neural network; Robustness; Lyapunov; Regularization
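The abstract does not give the Lya-Reg formulation itself, but it names the norm-based regularizers used as baselines. As a hedged illustration only, the sketch below shows how the L1, L2, and L2,1 penalties are conventionally computed on a weight matrix `W` and added to a training loss; the function names and the example matrix are hypothetical, not from the paper.

```python
import numpy as np

def l1_penalty(W):
    # L1 norm: sum of absolute entries; promotes element-wise sparsity
    return np.abs(W).sum()

def l2_penalty(W):
    # Squared Frobenius (L2) norm: shrinks all weights uniformly
    return (W ** 2).sum()

def l21_penalty(W):
    # L2,1 norm: sum of row-wise Euclidean norms; promotes row sparsity,
    # i.e., entire input features can be zeroed out together
    return np.sqrt((W ** 2).sum(axis=1)).sum()

# Toy weight matrix: one active row [3, 4] and one zero row
W = np.array([[3.0, 4.0],
              [0.0, 0.0]])

print(l1_penalty(W))   # 7.0
print(l2_penalty(W))   # 25.0
print(l21_penalty(W))  # 5.0  (sqrt(3^2 + 4^2) + 0)
```

In training, such a penalty is typically scaled by a coefficient and added to the task loss, e.g. `loss = cross_entropy + lam * l21_penalty(W)`; the paper's contribution is replacing these generic norms with a Lyapunov-derived regularizer.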
This article is indexed by Wanfang Data (万方数据) and other databases.