AdvCapsNet: To defend against adversarial attacks based on Capsule networks
Affiliation:1. Department of Electrical Engineering, Ferdowsi University of Mashhad, Mashhad, Iran;2. Department of Electrical Engineering, Quchan University of Technology, Quchan, Iran
Abstract: Convolutional neural networks have achieved state-of-the-art results across numerous applications, but recent work shows that these models can be easily fooled by adversarial perturbations. This is partially due to gradient calculation instability, which may be amplified throughout the network layers (Liao et al., 2018). To address this issue, we propose a novel AdvCapsNet, derived from the Capsule network (Sabour et al., 2017), which employs a significantly more complex non-linearity to defend against adversarial attacks. In this paper, we focus on transfer-based black-box adversarial attacks, which are more practical than their white-box counterparts. Specifically, we investigate the vanilla Capsule network's robustness and boost its performance by introducing an adversarial loss function as regularization. The weight updating between capsule layers is implemented via dynamic routing regularized by the additional adversarial term. Extensive experiments demonstrate that the proposed AdvCapsNet significantly boosts the Capsule network's robustness and is far more resistant to adversarial attacks than alternative baselines, including both CNN- and Capsule-based defense models.
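The abstract's idea of adding an adversarial loss term as regularization can be illustrated with a minimal NumPy sketch. This is not the paper's method: it assumes a plain linear classifier and FGSM-style perturbations rather than a capsule network with dynamic routing, and the names `adv_regularized_loss` and `fgsm_perturb` are illustrative only. The point it demonstrates is the combined objective: clean classification loss plus a weighted loss on adversarially perturbed inputs.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, y):
    # Mean negative log-likelihood of the true classes.
    return -np.mean(np.log(probs[np.arange(len(y)), y] + 1e-12))

def fgsm_perturb(x, W, y, eps):
    # FGSM-style perturbation for a linear classifier (logits = x @ W):
    # step in the sign of the input gradient of the cross-entropy loss.
    probs = softmax(x @ W)
    onehot = np.eye(W.shape[1])[y]
    grad_x = (probs - onehot) @ W.T / len(y)
    return x + eps * np.sign(grad_x)

def adv_regularized_loss(x, W, y, eps=0.1, lam=0.5):
    # Total loss = clean loss + lambda * adversarial loss, i.e. the
    # adversarial term acts as a regularizer on the training objective.
    clean = cross_entropy(softmax(x @ W), y)
    x_adv = fgsm_perturb(x, W, y, eps)
    adv = cross_entropy(softmax(x_adv @ W), y)
    return clean, adv, clean + lam * adv

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))      # toy inputs
W = rng.normal(size=(3, 2))      # toy linear weights
y = np.array([0, 1, 0, 1])       # toy labels
clean, adv, total = adv_regularized_loss(x, W, y)
```

In AdvCapsNet itself, the perturbed examples come from transfer-based black-box attacks and the regularized objective shapes the dynamic-routing updates between capsule layers; the sketch above only shows the loss-combination idea in its simplest form.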
Keywords:Capsule  Adversarial  Defense  Robustness
This article is indexed by ScienceDirect and other databases.