Single Model Dominant Federated Learning Based on Broad Network Architecture

Citation: WEN Jia-Bao, CHEN Min-Rong. Single Model Dominant Federated Learning Based on Broad Network Architecture[J]. Computer Systems & Applications, 2024, 33(1): 1-10
Authors: WEN Jia-Bao, CHEN Min-Rong
Affiliation: School of Computer Science, South China Normal University, Guangzhou 510631, China
Funding: National Natural Science Foundation of China (61872153, 61972288)

Abstract: Federated learning is a distributed machine learning approach that keeps data on local devices and uploads only computation results to the server, improving the efficiency and security of model delivery and aggregation. However, federated learning faces a major challenge: model sizes keep growing, and large numbers of parameters must be communicated over many iterations, which is difficult for small devices with limited communication capability. This study therefore restricts the client and the server to a single round of communication. Another challenge in federated learning is that data volumes differ across clients; under such data imbalance, model aggregation on the server becomes inefficient. To address these problems, this study proposes a lightweight federated learning framework that requires only one round of communication and designs an aggregation strategy algorithm for federated broad learning, FBL-LD. Within a single communication round, the algorithm collects reliable models, selects a dominant model among them, and adjusts the participation weights of the other models on a validation set to obtain a generalized federated model. FBL-LD thus maintains efficient aggregation under limited communication resources. Experimental results show that FBL-LD achieves lower overhead and higher accuracy than comparable federated broad learning algorithms and is robust to data imbalance.

Keywords: federated learning; broad network; one-shot communication; privacy protection; machine learning

Received: 2023-06-28
Revised: 2023-07-27