
Review of Deep Gradient Inversion Attacks and Defenses in Federated Learning
Cite this article: SUN Yu, YAN Yu, CUI Jian, XIONG Gaojian, LIU Jianhua. Review of Deep Gradient Inversion Attacks and Defenses in Federated Learning[J]. Journal of Electronics & Information Technology, 2024, 46(2): 428-442. doi: 10.11999/JEIT230541
Authors: SUN Yu, YAN Yu, CUI Jian, XIONG Gaojian, LIU Jianhua
Affiliation: 1. School of Cyber Science and Technology, Beihang University, Beijing 100191, China; 2. Key Laboratory of Ministry of Industry and Information Technology for Cyberspace Security (Beihang University), Beijing 100191, China
Funding: National Natural Science Foundation of China (32071775)
Abstract: As a distributed machine learning approach that preserves data ownership while releasing data usage rights, federated learning breaks down the data silos that hinder big-data modeling. However, exchanging only gradients rather than training data during federated training does not guarantee the confidentiality of users' training data. In recent years, novel deep gradient inversion attacks have shown that adversaries can reconstruct users' private training data from shared gradients, posing a serious threat to the privacy of federated learning. As gradient inversion techniques have evolved, adversaries have become increasingly capable of recovering large batches of original data from deep networks, even challenging Privacy-Preserving Federated Learning (PPFL) schemes that encrypt gradients. Effective targeted defenses are mainly based on perturbation transformations, which obfuscate gradients, inputs, or features to hide sensitive information. This paper first points out the gradient inversion vulnerability of privacy-preserving federated learning and presents the gradient inversion threat model. Deep gradient inversion attacks are then reviewed in detail from three perspectives: attack paradigms, attack capabilities, and attack targets. Perturbation-based defenses are subsequently divided into three categories according to the perturbed object, namely gradient perturbation, input perturbation, and feature perturbation, and representative works in each category are analyzed. Finally, directions for future research are discussed.
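The optimization-based gradient-matching paradigm that the survey covers (in the style of Deep Leakage from Gradients) can be sketched as follows. This is a minimal illustration, not the authors' experimental setup: the tiny linear model, batch of one, and all hyperparameters are illustrative assumptions.

```python
# DLG-style gradient inversion sketch: the adversary optimizes dummy data
# and a soft dummy label so that their gradient matches the shared gradient.
import torch

torch.manual_seed(0)

# "Client" side: a tiny classifier and one private sample (both assumed).
model = torch.nn.Linear(8, 3)
x_private = torch.randn(1, 8)
y_private = torch.tensor([2])
loss = torch.nn.functional.cross_entropy(model(x_private), y_private)
true_grads = torch.autograd.grad(loss, model.parameters())  # shared in FL

# "Adversary" side: dummy input and soft label to be optimized.
x_dummy = torch.randn(1, 8, requires_grad=True)
y_dummy = torch.randn(1, 3, requires_grad=True)
opt = torch.optim.Adam([x_dummy, y_dummy], lr=0.1)

def grad_match_loss():
    # Cross-entropy with a soft (softmax-ed) dummy label.
    pred = model(x_dummy)
    dummy_loss = torch.mean(torch.sum(
        -torch.softmax(y_dummy, -1) * torch.log_softmax(pred, -1), -1))
    # create_graph=True so we can differentiate through this gradient.
    dummy_grads = torch.autograd.grad(
        dummy_loss, model.parameters(), create_graph=True)
    return sum(((dg - tg) ** 2).sum()
               for dg, tg in zip(dummy_grads, true_grads))

initial = grad_match_loss().item()
for _ in range(300):
    opt.zero_grad()
    grad_match_loss().backward()
    opt.step()
final = grad_match_loss().item()  # should be far below `initial`
```

As the gradient-matching loss shrinks, `x_dummy` approaches the private input; the defenses surveyed in the paper aim to break exactly this correspondence between gradients and data.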

Keywords: Federated learning; Gradient inversion; Data reconstruction; Label restoration; Perturbation transformation
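Of the three defense categories the survey identifies, gradient perturbation is the simplest to illustrate. The sketch below shows a common clip-then-add-noise transformation (as used in differentially private SGD); the function name, clipping threshold, and noise scale are illustrative assumptions, not a scheme from the paper.

```python
# Gradient-perturbation defense sketch: bound the gradient's L2 norm,
# then add Gaussian noise before sharing it with the server.
import numpy as np

def perturb_gradient(grad, clip_norm=1.0, noise_sigma=0.1, rng=None):
    """Clip grad to clip_norm in L2 norm, then add Gaussian noise."""
    if rng is None:
        rng = np.random.default_rng(0)
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_sigma * clip_norm, size=grad.shape)

raw_grad = np.array([3.0, 4.0])  # L2 norm 5.0
# With noise_sigma=0 the result is just the clipped gradient [0.6, 0.8].
safe_grad = perturb_gradient(raw_grad, clip_norm=1.0, noise_sigma=0.0)
```

Stronger noise hides more information from a gradient inversion adversary but also degrades model accuracy; navigating this privacy-utility trade-off is a central theme of the defenses the survey reviews.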
Received: 2023-06-01
Revised: 2023-12-01

