Gradient Compression Algorithm for Improving Communication Efficiency of Federated Learning
Citation: TIAN Jin-Xiao. Gradient Compression Algorithm for Improving Communication Efficiency of Federated Learning [J]. Computer Systems & Applications, 2022, 31(10): 199-205.
Author: TIAN Jin-Xiao
Affiliation: School of Computer and Artificial Intelligence, Southwest Jiaotong University, Chengdu 611756, China
Abstract: Federated learning protects user privacy by aggregating models trained on the clients, so that raw data never leaves the client. Because a large number of devices participate in training, the data is non-independent and identically distributed (non-IID) and communication bandwidth is limited; reducing communication cost is therefore an important research direction in federated learning. Gradient compression is an effective way to improve the communication efficiency of federated learning, but most commonly used gradient compression methods are designed for independent and identically distributed data and do not account for the characteristics of federated learning. For federated scenarios with non-IID data, this study proposes a projection-based sparse ternary compression algorithm: gradient compression on both the client and the server reduces communication cost, and a gradient-projection aggregation strategy on the server mitigates the adverse impact of non-IID client data. Experimental results show that the proposed algorithm not only improves communication efficiency but also outperforms existing gradient compression algorithms in convergence speed and accuracy.
Keywords: federated learning; communication efficiency; non-independent and identically distributed (non-IID) data; gradient compression
Received: 2021-12-29
Revised: 2022-01-29
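The full text is not reproduced on this page, but the two mechanisms the abstract names are recognizable building blocks. The sketch below illustrates the sparse ternary compression idea that the proposed algorithm builds on: keep only the top-k gradient entries by magnitude and replace them with a shared signed magnitude. It is a minimal NumPy illustration, not the authors' implementation; the function names and the sparsity ratio are assumptions.

import numpy as np

def sparse_ternary_compress(grad, sparsity=0.01):
    # Keep the top-k entries by magnitude, then ternarize: each kept entry
    # becomes +/- mu, where mu is the mean magnitude of the kept entries.
    flat = grad.ravel()
    k = max(1, int(sparsity * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]   # k largest-magnitude positions
    mu = float(np.abs(flat[idx]).mean())           # shared magnitude
    signs = np.sign(flat[idx]).astype(np.int8)     # ternary signs in {-1, 0, +1}
    # Only (idx, signs, mu) need to be transmitted, not the dense gradient.
    return idx, signs, mu

def sparse_ternary_decompress(idx, signs, mu, shape):
    # Rebuild a dense gradient: zeros everywhere except the kept entries.
    flat = np.zeros(int(np.prod(shape)), dtype=np.float32)
    flat[idx] = mu * signs
    return flat.reshape(shape)

Applying the same compression to the server-to-client broadcast, as the abstract describes, cuts traffic in both directions rather than only on the uplink.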

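The server-side aggregation the abstract describes projects client gradients to ease the conflicts caused by non-IID data. A minimal sketch of one common projection rule follows (remove from an update its component along any other update it conflicts with, then average); it assumes flattened 1-D client updates and is illustrative rather than the paper's exact procedure.

import numpy as np

def project_and_aggregate(updates, seed=0):
    # For each client update, subtract its projection onto every other update
    # it conflicts with (negative inner product), then average the results.
    rng = np.random.default_rng(seed)
    projected = []
    for g in updates:
        g = g.astype(np.float64).copy()
        for j in rng.permutation(len(updates)):    # random visiting order
            h = updates[j]
            dot = float(g @ h)
            if dot < 0.0:                          # conflicting directions
                g -= (dot / (float(h @ h) + 1e-12)) * h
        projected.append(g)
    return np.mean(projected, axis=0)              # aggregated server update

Because conflicting components are removed before averaging, updates from clients with very different local data distributions no longer cancel each other out as strongly.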