A Study of Local Differential Privacy Mechanisms Based on Federated Learning
Cite this article: REN Yizhi, LIU Rongke, WANG Dong, YUAN Lifeng, SHEN Yanzhao, WU Guohua, WANG Qiuhua, YANG Changtian. A Study of Local Differential Privacy Mechanisms Based on Federated Learning[J]. Journal of Electronics & Information Technology, 2023, 45(3): 784-792.
Authors: REN Yizhi, LIU Rongke, WANG Dong, YUAN Lifeng, SHEN Yanzhao, WU Guohua, WANG Qiuhua, YANG Changtian
Affiliations: 1. School of Cyberspace Security, Hangzhou Dianzi University, Hangzhou 310018, China; 2. Shandong Blockchain Research Institute, Jinan 250000, China
Funding: Zhejiang Provincial "Pioneer" and "Leading Goose" R&D Program (2022C03174); Scientific Research Project of the Zhejiang Provincial Department of Education (Y202147115); Fundamental Research Funds for the Provincial Universities of Zhejiang (GK229909299001-023)
Abstract: Federated learning and swarm learning are two popular distributed machine-learning paradigms. The former lets a server compute shared model parameters while keeping user data out of third parties' hands; the latter uses blockchain technology to let all users aggregate model parameters as equals, without a central server. However, analyzing the parameters obtained after training, such as the weights of a deep neural network, can still leak users' private information. Many methods now apply Local Differential Privacy (LDP) to protect model parameters in federated learning, but all struggle to keep the loss in model test accuracy small under a small privacy budget and a small number of users. To address this problem, this paper proposes a Positive and Negative Piecewise Mechanism (PNPM), which perturbs local model parameters before aggregation. First, the mechanism is proved to satisfy the strict definition of differential privacy, guaranteeing the privacy of the algorithm. Second, analysis shows that the mechanism preserves model accuracy even with a small number of users, guaranteeing its effectiveness. Finally, comparisons with other state-of-the-art methods on three mainstream image-classification datasets show that PNPM performs better in both model accuracy and privacy protection.

Keywords: privacy protection; federated learning; local differential privacy; blockchain
Received: 2022-08-12
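The abstract describes perturbing each local model parameter with an LDP mechanism before aggregation. As a generic illustration only (not the paper's PNPM, whose construction is not given in this abstract), the sketch below applies the standard Piecewise Mechanism for a value in [-1, 1] to clipped model weights; the helper names `piecewise_mechanism` and `perturb_weights` are hypothetical.

```python
import math
import random

def piecewise_mechanism(t: float, epsilon: float) -> float:
    """Perturb a value t in [-1, 1] under epsilon-LDP with the classic
    Piecewise Mechanism. The output is an unbiased estimate of t whose
    support is the bounded interval [-C, C]."""
    assert -1.0 <= t <= 1.0
    z = math.exp(epsilon / 2)
    C = (z + 1) / (z - 1)                  # support bound, shrinks as epsilon grows
    l = (C + 1) / 2 * t - (C - 1) / 2      # left edge of the high-probability band
    r = l + C - 1                          # right edge; the band has width C - 1
    if random.random() < z / (z + 1):
        # with high probability, report a value close to t
        return random.uniform(l, r)
    # otherwise report from the two low-probability tails [-C, l] and [r, C]
    tail_len = (l + C) + (C - r)
    u = random.uniform(0.0, tail_len)
    return -C + u if u < l + C else r + (u - (l + C))

def perturb_weights(weights, epsilon):
    """Clip each local model weight into [-1, 1] and perturb it
    independently before sending it to the aggregator."""
    return [piecewise_mechanism(max(-1.0, min(1.0, w)), epsilon)
            for w in weights]
```

Because the mechanism is unbiased, averaging the perturbed weights across many clients recovers the true mean; the paper's contribution is keeping that estimate accurate even when the number of users and the privacy budget are both small.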
