Research on Federated Deep Neural Network Model for Data Privacy Preserving
Cite this article: Zhang Zehui, Fu Yao, Gao Tiegang. Research on federated deep neural network model for data privacy preserving[J]. Acta Automatica Sinica, 2022, 48(5): 1273-1284.
Authors: Zhang Zehui, Fu Yao, Gao Tiegang
Affiliation: 1. College of Software, Nankai University, Tianjin 300071
Funding: National Science and Technology Major Project (2018YFB0204304)
Abstract: In recent years, artificial intelligence has been widely applied to image classification, object detection, semantic segmentation, intelligent control, fault diagnosis, and other fields. In certain industries such as healthcare, however, data privacy concerns make it difficult for multiple research institutions or organizations to share data for training federated learning models. This paper therefore introduces homomorphic encryption (HE) into federated learning and proposes a privacy-preserving federated deep neural network (PFDNN) model. The model guarantees data privacy by homomorphically encrypting its weight parameters, and greatly reduces the amount of encryption and decryption computation required during training. Theoretical analysis and experiments show that the proposed model offers good security while maintaining high accuracy.
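The core idea summarized in the abstract, clients homomorphically encrypt their weight parameters so that an aggregator can combine them without ever seeing plaintext values, can be sketched with a toy additively homomorphic Paillier-style scheme. Everything below (the tiny demo primes, the fixed-point `SCALE`, the `fed_avg_encrypted` helper) is an illustrative assumption for exposition, not the paper's actual PFDNN implementation:

```python
import math
import random

# Toy Paillier cryptosystem (additively homomorphic).
# DEMO PRIMES ONLY -- real deployments need >= 2048-bit keys.
P, Q = 4999, 5003
N = P * Q
N_SQ = N * N
G = N + 1                                            # standard choice g = n + 1
LAM = (P - 1) * (Q - 1) // math.gcd(P - 1, Q - 1)    # lcm(p-1, q-1)
MU = pow(LAM, -1, N)                                 # valid because g = n + 1

def encrypt(m: int) -> int:
    """E(m) = g^m * r^n mod n^2 for a random r coprime to n."""
    r = random.randrange(1, N)
    while math.gcd(r, N) != 1:
        r = random.randrange(1, N)
    return (pow(G, m, N_SQ) * pow(r, N, N_SQ)) % N_SQ

def decrypt(c: int) -> int:
    """m = L(c^lambda mod n^2) * mu mod n, where L(x) = (x - 1) // n."""
    return ((pow(c, LAM, N_SQ) - 1) // N) * MU % N

def add_cipher(c1: int, c2: int) -> int:
    """Homomorphic addition: E(m1) * E(m2) mod n^2 decrypts to m1 + m2."""
    return (c1 * c2) % N_SQ

SCALE = 1000  # fixed-point scale for (non-negative) float weights

def fed_avg_encrypted(client_weights):
    """Each client encrypts its quantized weight; the aggregator sums
    the ciphertexts without ever seeing a plaintext value."""
    ciphers = [encrypt(int(round(w * SCALE))) for w in client_weights]
    agg = ciphers[0]
    for c in ciphers[1:]:
        agg = add_cipher(agg, c)           # server side: ciphertexts only
    total = decrypt(agg)                   # done by the key-holding clients
    return total / SCALE / len(client_weights)

print(fed_avg_encrypted([0.12, 0.34, 0.56]))   # ≈ 0.34
```

In a real deployment each client would hold thousands of weights and the server would aggregate every coordinate this way; the paper's contribution is reducing how much of this encryption/decryption work is actually needed per training round.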

Keywords: federated learning, deep learning, data privacy, homomorphic encryption, neural network
Received: 2020-04-21

