Similar Articles
20 similar articles found.
1.
Most existing model-synchronization methods for federated learning rely on a single-tier parameter-server architecture, which adapts poorly to today's heterogeneous wireless networks and suffers from overloaded single-point communication and poor system scalability. To address these problems, this paper proposes an efficient model-synchronization method for federated learning in edge hybrid wireless networks. In the hybrid wireless network environment, edge mobile terminals upload their local models to nearby small-cell base stations; after receiving the models, a small-cell base station runs the aggregation algorithm and…

2.
Federated learning is a machine-learning setting that protects data privacy, but high communication costs and client heterogeneity hinder its large-scale deployment. To address these two problems, a federated learning algorithm oriented toward communication-cost optimization is proposed. First, the server receives generative models from the clients and uses them to produce simulated data; then the server trains the global model on the simulated data and sends it to the clients, which fine-tune it to obtain their final models. The algorithm needs only a single round of client-server communication and uses client-side fine-tuning to handle client heterogeneity. Experiments with 20 clients were conducted on the MNIST and CIFAR-10 datasets. The results show that, while preserving accuracy, the algorithm reduces the communication volume to 1/10 of that of the Federated Averaging (FedAvg) algorithm on MNIST and to 1/100 of FedAvg on CIFAR-10.
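As a rough illustration of the one-round exchange described above, the sketch below substitutes a per-class Gaussian-mean "generator" for the paper's generative model and a nearest-centroid classifier for the global model; all names and the noise level are illustrative assumptions:

```python
import random

def client_generative_model(data):
    """Client side: fit a tiny per-class 'generator' (the mean of each
    feature per class) instead of sharing raw data."""
    by_label = {}
    for x, y in data:
        by_label.setdefault(y, []).append(x)
    return {y: [sum(col) / len(col) for col in zip(*xs)]
            for y, xs in by_label.items()}

def server_simulate(gen_models, n_per_class=20, noise=0.1):
    """Server side: draw simulated samples around each uploaded generator."""
    sim = []
    for gen in gen_models:
        for label, mean in gen.items():
            for _ in range(n_per_class):
                sim.append(([m + random.gauss(0, noise) for m in mean], label))
    return sim

def predict(model, x):
    """Nearest-centroid classification with the global model."""
    return min(model, key=lambda y: sum((a - b) ** 2
                                        for a, b in zip(x, model[y])))

random.seed(0)
# One round: two clients upload generators; the server trains on simulated
# data (here: recomputes centroids) and returns the global model.
c1 = client_generative_model([([0.0, 0.0], "a"), ([0.1, 0.0], "a")])
c2 = client_generative_model([([1.0, 1.0], "b"), ([0.9, 1.0], "b")])
global_model = client_generative_model(server_simulate([c1, c2]))
```

The clients never upload raw samples, only the tiny per-class statistics; the single upload plus single download is the entire communication budget.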

3.
The heterogeneity of diverse IoT terminal devices in computing, storage, and communication limits the efficiency of federated learning. To address this problem in the federated training process, an efficient federated learning algorithm based on proxy election is proposed. A Mahalanobis-distance-based proxy-node election strategy takes each device's computing power and idle time as election factors and elects cost-effective devices as proxy nodes, making full use of device computing capacity. A new cloud-edge-device federated learning architecture built on the proxy nodes further improves federated learning efficiency across heterogeneous devices. Experiments on the public MNIST and CIFAR-10 datasets and on real data from smart-home devices show that the method improves federated learning efficiency by 22%.
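A minimal sketch of a Mahalanobis-distance election over (compute power, idle time) profiles; the concrete ranking rule (prefer devices above the fleet mean on both axes, ranked by distance) is an assumption, since the abstract does not give the exact scoring:

```python
def mahalanobis_election(devices):
    """Elect the proxy node whose (compute_power, idle_time) profile is
    best under a Mahalanobis-distance score against the fleet mean."""
    n = len(devices)
    mean = [sum(d[i] for d in devices) / n for i in (0, 1)]
    # 2x2 sample covariance matrix and its inverse
    cov = [[sum((d[i] - mean[i]) * (d[j] - mean[j]) for d in devices) / (n - 1)
            for j in (0, 1)] for i in (0, 1)]
    det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
    inv = [[ cov[1][1] / det, -cov[0][1] / det],
           [-cov[1][0] / det,  cov[0][0] / det]]

    def dist(d):
        diff = [d[0] - mean[0], d[1] - mean[1]]
        return sum(diff[i] * inv[i][j] * diff[j]
                   for i in (0, 1) for j in (0, 1)) ** 0.5

    # Prefer devices above the fleet mean on both axes, ranked by distance.
    candidates = [d for d in devices if d[0] >= mean[0] and d[1] >= mean[1]]
    return max(candidates or devices, key=dist)

# (compute_power, idle_hours) for five hypothetical devices
fleet = [(1.0, 2.0), (1.2, 2.5), (0.8, 1.0), (3.0, 8.0), (1.1, 2.2)]
proxy = mahalanobis_election(fleet)
```

Unlike per-axis thresholds, the Mahalanobis distance accounts for the correlation between compute power and idle time, so a device that is unusually strong on both factors jointly stands out.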

4.
Federated learning is a distributed machine-learning method that keeps data local and uploads only computation results, improving the efficiency and security of model exchange and aggregation. One major challenge, however, is that the uploaded models keep growing in size and their many parameters are iterated many times, which burdens small devices with limited communication capacity. In this paper, client and server are therefore limited to a single mutual communication opportunity. Another challenge is that data volumes differ across clients; under such imbalanced data, server-side model aggregation becomes inefficient. To address these problems, this paper proposes a lightweight federated learning framework requiring only one communication round and designs an aggregation strategy for federated broad learning, FBL-LD. The algorithm collects reliable models in a single round, elects a dominant model, and generalizes the federated model by using a validation set to reasonably adjust the participation weights of the other models. FBL-LD maintains efficient aggregation with limited communication resources. Experimental results show that FBL-LD achieves lower overhead and higher accuracy than comparable federated broad-learning algorithms and is robust to the imbalanced-data problem.
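The single-round aggregation idea can be sketched as follows; the reliability threshold and the validation-accuracy weighting are illustrative assumptions, not the paper's exact FBL-LD rules:

```python
def fbl_ld_aggregate(models, val_scores, reliable_min=0.5):
    """Single-round aggregation sketch for FBL-LD-style federated broad
    learning: keep only 'reliable' models, elect the dominant one by
    validation score, and blend the rest with validation-derived weights.
    The reliability threshold is an assumption, not from the paper."""
    keep = [i for i, s in enumerate(val_scores) if s >= reliable_min]
    dominant = max(keep, key=lambda i: val_scores[i])
    total = sum(val_scores[i] for i in keep)
    # Validation-weighted parameter average over the reliable models;
    # models are flat parameter lists here.
    agg = [sum(val_scores[i] / total * models[i][k] for i in keep)
           for k in range(len(models[0]))]
    return dominant, agg

models = [[1.0, 2.0], [3.0, 4.0], [100.0, 100.0]]   # third model is an outlier
scores = [0.9, 0.8, 0.05]                            # validation accuracies
dom, global_params = fbl_ld_aggregate(models, scores)
```

Filtering before averaging is what keeps the single round safe: with no later rounds to correct it, one unreliable model would otherwise skew the federated model permanently.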

5.
郭棉  张锦友 《计算机应用》2021,41(9):2639-2645
To handle the diversity of Internet of Things (IoT) data sources, non-independent-and-identically-distributed (non-IID) data, and heterogeneity in edge devices' computing capacity and energy consumption, a computation-offloading strategy is proposed for mobile edge computing (MEC) networks in which centralized learning and federated learning coexist. First, a computation-offloading system model covering both centralized and federated learning is built, accounting for the network transmission delay, computation delay, and energy consumption of both learning paradigms. Then a machine-learning-oriented offloading optimization model is formulated with average system delay as the objective, constrained by energy consumption and by the number of training iterations needed for a target accuracy. After a game-theoretic analysis of the offloading problem, an energy-constrained delay-greedy (ECDG) algorithm is proposed, which obtains optimized solutions through a two-stage update combining delay-greedy decisions with energy-constrained decisions. Compared with a centralized greedy algorithm and the Federated learning-oriented Client Selection (FedCS) algorithm, ECDG achieves the lowest average learning delay: about 1/10 that of the centralized greedy algorithm and 1/5 that of FedCS. Experimental results show that ECDG can automatically select the optimal machine-learning model for each data source via computation offloading, effectively reducing learning delay, improving the energy efficiency of edge devices, and meeting the quality-of-service (QoS) requirements of IoT applications.

6.
The rapid development of network communication, along with the drastic increase in the number of smart devices, has triggered a surge in network traffic, which can contain private data and in turn affect user privacy. Recently, Federated Learning (FL) has been proposed for Intrusion Detection Systems (IDS) to provide attack detection, privacy preservation, and cost reduction, which are crucial issues in traditional centralized machine-learning-based IDS. However, FL-based approaches still exhibit vulnerabilities that adversaries can exploit to compromise user data. At the same time, meta-models (including blending models) are recognized as a way to improve attack detection and classification, since they enhance generalization and predictive performance by combining multiple base models. Therefore, in this paper, we propose a Federated Blending model-driven IDS framework for the Internet of Things (IoT) and Industrial IoT (IIoT), called F-BIDS, to further protect the privacy of existing ML-based IDS. The framework uses a Decision Tree (DT) and a Random Forest (RF) as base classifiers to first produce the meta-data. The meta-classifier, a Neural Network (NN) model, then uses the meta-data during the federated training step and finally makes the final classification on the test set. In contrast to classical FL approaches, the federated meta-classifier is trained on the meta-data (composite data) instead of user-sensitive data, further enhancing privacy. To evaluate the performance of F-BIDS, we used recent open cyber-security datasets: Edge-IIoTset (published in 2022) and InSDN (2020). We chose these datasets because they are recent and contain a large amount of network traffic, including both malicious and benign flows.
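A toy sketch of the blending pipeline: stand-in base classifiers produce meta-data (attack probabilities), and a tiny perceptron stands in for the NN meta-classifier that would be trained federatedly on that meta-data. The base-model rules, feature meanings, and training scheme here are all illustrative assumptions:

```python
# Hypothetical stand-ins for the DT/RF base models: each maps a flow's
# feature vector to an attack probability.
def base_dt(x):   # toy "decision tree": rule on packet rate
    return 0.9 if x[0] > 0.5 else 0.1

def base_rf(x):   # toy "random forest": rule on payload entropy
    return 0.8 if x[1] > 0.5 else 0.2

def make_meta_data(X):
    """Base classifiers run locally on (potentially sensitive) raw traffic;
    only their outputs -- the meta-data -- leave for federated training."""
    return [[base_dt(x), base_rf(x)] for x in X]

def train_meta(meta_X, y, epochs=200, lr=0.5):
    """Tiny perceptron standing in for the NN meta-classifier."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for m, t in zip(meta_X, y):
            pred = 1 if w[0] * m[0] + w[1] * m[1] + b > 0 else 0
            err = t - pred
            w = [wi + lr * err * mi for wi, mi in zip(w, m)]
            b += lr * err
    return w, b

X = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
y = [1, 1, 0, 0]                      # 1 = attack, 0 = benign
meta = make_meta_data(X)
w, b = train_meta(meta, y)
preds = [1 if w[0] * m[0] + w[1] * m[1] + b > 0 else 0 for m in meta]
```

The privacy gain is structural: the federated step only ever sees the two-dimensional probability vectors, never the raw traffic features.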

7.
Smart cities, smart factories, and similar applications challenge the performance and connectivity of Internet of Things (IoT) devices. Edge computing compensates for these capability-constrained devices: by migrating compute-intensive tasks to edge nodes (ENs), IoT devices can save more energy while maintaining quality of service. Offloading decisions involve cooperative, complex resource management and should be made in real time according to dynamic workloads and network conditions. Using simulation experiments, deep reinforcement learning agents are deployed on both IoT devices and edge nodes to maximize long-term utility, and federated learning is introduced to train the agents in a distributed manner. First, an edge-computing-enabled IoT system is built: an IoT device downloads the existing model from the EN for training and offloads compute-intensive tasks to the EN; the device uploads its updated parameters to the EN, which aggregates them with its own model to obtain a new model; the cloud can obtain and aggregate the new models from the ENs, and devices can fetch the updated parameters from the EN for on-device use. After several iterations, the IoT devices approach the performance of centralized training while reducing the transmission cost between devices and edge nodes. The experiments confirm the effectiveness of the decision scheme and of federated learning in dynamic IoT environments.

8.
Federated learning protects user privacy by aggregating client-trained models so that data stays on the clients. Given the huge number of participating devices, non-IID data, and limited communication bandwidth, reducing communication cost is an important research direction for federated learning. Gradient compression is an effective way to improve communication efficiency, but most existing gradient-compression methods assume IID data and ignore the characteristics of federated learning. For federated settings with non-IID data, this paper proposes a projection-based sparse ternary compression algorithm: gradients are compressed on both client and server to reduce communication cost, and the server adopts a gradient-projection aggregation strategy to mitigate the adverse effects of non-IID client data. Experimental results show that the proposed algorithm not only improves communication efficiency but also outperforms existing gradient-compression algorithms in both convergence speed and accuracy.
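The two ingredients can be sketched independently. The ternary quantization below follows the usual sparse-ternary recipe (top-k magnitudes quantized to a shared magnitude), and the projection step is a PCGrad-style stand-in, since the abstract does not specify the paper's exact projection:

```python
def sparse_ternary_compress(grad, k):
    """Sparse ternary compression sketch: keep the k largest-magnitude
    entries and quantize them to {-mu, 0, +mu}, where mu is the mean
    magnitude of the kept entries."""
    idx = sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)[:k]
    mu = sum(abs(grad[i]) for i in idx) / k
    out = [0.0] * len(grad)
    for i in idx:
        out[i] = mu if grad[i] > 0 else -mu
    return out

def project_conflicts(g_a, g_b):
    """Server-side aggregation sketch: if two client updates conflict
    (negative inner product), project g_a onto the plane normal to g_b
    before averaging -- a PCGrad-style stand-in for the paper's projection."""
    dot = sum(a * b for a, b in zip(g_a, g_b))
    if dot >= 0:
        return g_a
    nb = sum(b * b for b in g_b)
    return [a - dot / nb * b for a, b in zip(g_a, g_b)]

g = [0.9, -0.05, 0.02, -1.1, 0.4]
compressed = sparse_ternary_compress(g, k=2)   # only two nonzeros survive
```

A ternary vector needs only positions, signs, and one shared magnitude on the wire, which is where the communication savings come from; the projection step removes the destructive component that non-IID clients would otherwise cancel out at aggregation time.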

9.
As an emerging privacy-preserving distributed computing paradigm, federated learning protects user privacy and data security to a degree. However, clients and the server must frequently exchange model parameters, incurring a large communication overhead; in bandwidth-limited wireless settings this has become the main bottleneck holding back federated learning. To address this problem, a Z-Score-based dynamic sparse compression algorithm is proposed. Z-Scores are used for outlier detection on local model updates: important update values are treated as outliers and selected for transmission, achieving sparsification without complex sorting algorithms or prior knowledge of the raw model updates. As communication rounds progress, the sparsity rate is adjusted dynamically according to the global model's loss, minimizing total traffic while maintaining model accuracy. Experiments show that under IID data the algorithm reduces communication volume by 95% compared with Federated Averaging (FedAvg) with only a 1.6% loss in accuracy, and by 40%-50% compared with the FTTQ algorithm with only a 1.29% accuracy loss, demonstrating that the method significantly reduces communication cost while preserving model performance.
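A minimal sketch of the Z-Score selection and a loss-driven threshold schedule; the schedule itself (raise the bar as the global loss falls) is an illustrative assumption:

```python
from statistics import mean, pstdev

def zscore_sparsify(update, z_thresh):
    """Treat 'important' update values as outliers under a Z-Score test and
    transmit only those; everything else is zeroed out. No sorting and no
    prior knowledge of the raw update is needed."""
    mu, sigma = mean(update), pstdev(update)
    return [v if sigma and abs(v - mu) / sigma > z_thresh else 0.0
            for v in update]

def dynamic_threshold(global_loss, base=1.0):
    """As the global loss falls over the rounds, raise the Z-Score bar so
    the sparsity rate grows and total traffic shrinks. The exact schedule
    here is an illustrative assumption."""
    return base + 1.0 / max(global_loss, 1e-6)

update = [0.01, -0.02, 0.03, 5.0, -0.01, -4.0, 0.02]
sparse = zscore_sparsify(update, dynamic_threshold(global_loss=2.0))
```

Only the two outlying values survive; their indices and values are all that must be sent upstream.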

10.
The Internet of Vehicles (IoV) consists of smart vehicles, Autonomous Vehicles (AVs), and roadside units (RSUs) that communicate wirelessly to provide enhanced transportation services such as improved traffic efficiency and reduced congestion and accidents. Unfortunately, current IoV networks suffer from security, privacy, and trust issues. Blockchain technology emerged as a decentralized approach for enhanced security that does not depend on trusted third parties to run services. Blockchain offers trustworthiness and immutability and mitigates the single-point-of-failure problem and other attacks. In this work, we present the state of the art in Blockchain-enabled IoVs (BIoVs), with a particular focus on applications such as crowdsourcing, energy trading, traffic-congestion reduction, collision and accident avoidance, infotainment, and content caching. We also present in-depth applications of Federated Learning (FL) for BIoVs. The key challenges of integrating Blockchain with IoV are investigated in several domains, such as edge computing, machine learning, and FL. Lastly, we present open issues, challenges, and future opportunities in AI-enabled BIoV, hardware-assisted security for BIoV, and quantum-computing attacks on BIoV applications.

11.
Federated learning protects the data privacy of edge devices during collaborative training. In typical federated learning scenarios, however, the participants are heterogeneous edge devices, and resource-constrained devices take longer, slowing down the whole federated learning system. Existing schemes either ignore stragglers or distribute computation tasks following distributed-computing ideas, but task distribution involves transferring raw data and cannot guarantee data privacy. To mitigate the straggler problem in small- and medium-scale collaborative training across heterogeneous devices, a coded federated learning scheme is proposed: an efficient scheduling algorithm is designed around the mathematical properties of linear coding, accelerating federated learning in heterogeneous systems while preserving data privacy. Experiments on a real testbed show that when performance differs widely across heterogeneous devices, the coded federated learning scheme shortens straggler training time by 92.85%.
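The abstract does not give the paper's linear code, so the sketch below uses the classic gradient-coding construction as a stand-in (three workers, replication factor 2, any one straggler tolerated): each worker sends one coded combination of its partial gradients, and the server recovers the full sum from the two fastest workers.

```python
def encode(worker, g1, g2, g3):
    """Each worker sends one coded combination of the partial gradients it
    holds (replication factor 2)."""
    comb = {
        1: [a / 2 + b for a, b in zip(g1, g2)],   # worker 1 holds g1, g2
        2: [a - b for a, b in zip(g2, g3)],       # worker 2 holds g2, g3
        3: [a / 2 + b for a, b in zip(g1, g3)],   # worker 3 holds g1, g3
    }
    return comb[worker]

DECODE = {  # coefficients that recover g1 + g2 + g3 from any two workers
    (1, 2): (2.0, -1.0), (1, 3): (1.0, 1.0), (2, 3): (1.0, 2.0),
}

def decode(workers, msgs):
    """Server: combine the messages of the two fastest workers."""
    ca, cb = DECODE[tuple(sorted(workers))]
    return [ca * a + cb * b for a, b in zip(*msgs)]

g1, g2, g3 = [1.0, 0.0], [0.0, 2.0], [3.0, 1.0]
# Worker 3 straggles; the server decodes from workers 1 and 2 alone.
full = decode((1, 2), [encode(1, g1, g2, g3), encode(2, g1, g2, g3)])
```

The redundancy is paid for up front (each partition is computed twice), which is exactly the trade the scheme makes: a bounded amount of extra computation in exchange for never waiting on the slowest device.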

12.
Federated learning emerged to resolve the otherwise irreconcilable conflict between data-sharing needs and privacy-protection requirements. As a form of distributed machine learning, it requires participants and the central server to continually exchange large numbers of model parameters, incurring considerable communication overhead. Moreover, federated learning is increasingly deployed on mobile devices with limited bandwidth and battery, and limited network bandwidth combined with surging client numbers aggravates the communication bottleneck. Addressing the communication bottleneck of federated learning…

13.
Federated learning is a revolutionary deep-learning paradigm that lets users collaboratively train a global model without exposing their private data. However, malicious behavior by some clients creates risks of single points of failure and privacy leakage, seriously challenging the security of federated learning. To address these security issues, this paper builds on existing work to propose a blockchain-empowered multi-edge federated learning model. First, a blockchain replaces the central server to strengthen the stability and reliability of the training process. Second, an edge-computing-based consensus mechanism is proposed for a more efficient consensus process. In addition, reputation evaluation is integrated into the federated training pipeline to transparently measure each participant's contribution and regulate worker-node behavior. Comparative experiments show that the proposed scheme maintains high accuracy in malicious environments and withstands a higher proportion of malicious participants than traditional federated learning algorithms.

14.
柏财通  崔翛龙  李爱 《计算机工程》2022,48(10):103-109
When federated learning (FL) algorithms are applied to robust speech recognition, non-independent-and-identically-distributed (non-IID) training data and a lack of personalization in client models become problems. To address them, a federated learning algorithm with personalized local distillation (PLD-FLD) is proposed. Clients upload local logits over the uplink; after aggregation at the central server, parameters are sent back down, and a client adopts the server parameters only when the edge model outperforms the local model in tests, ensuring both the personalization and the generalization of local models. Model parameters and global logits are delivered to clients over the downlink for local distillation, solving the non-IID data problem. Experimental results on the AISHELL and PERSONAL datasets show that PLD-FLD strikes a good balance between model performance and communication cost, reaches 91% speech-recognition accuracy on a military-equipment control task, and converges faster and more robustly than the distributed-training FL and FLD algorithms.

15.
The increasing data produced by IoT devices and the need to harness intelligence in our environments are shifting computing and intelligence to the edge, leading to a novel computing paradigm called Edge Intelligence/Edge AI. This paradigm combines Artificial Intelligence and Edge Computing, enables the deployment of machine-learning algorithms at the edge, where data is generated, and overcomes the drawbacks of a centralized cloud-based approach (e.g., performance bottlenecks, poor scalability, and a single point of failure). Edge AI supports the distributed Federated Learning (FL) model, which keeps local training data on the end devices and shares only globally learned model parameters with the cloud. This paper proposes HED-FL, a novel, energy-efficient, and dynamic FL-based approach built on a hierarchical edge FL architecture, which supports a sustainable learning paradigm by aggregating model parameters at different layers and adapting the number of learning rounds at the edge to save energy while preserving the learning model's accuracy. Performance evaluations of the proposed approach have also been carried out with respect to model accuracy, loss, and energy consumption.
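A miniature of hierarchical aggregation with adaptive edge rounds, using size-weighted FedAvg at both layers; actual local training is elided, and the round count is the knob the paper adapts to save energy:

```python
def fedavg(models, sizes):
    """Size-weighted parameter averaging (FedAvg-style)."""
    total = sum(sizes)
    return [sum(s / total * m[k] for m, s in zip(models, sizes))
            for k in range(len(models[0]))]

def hierarchical_round(clusters, edge_rounds):
    """HED-FL-style sketch: each edge aggregator averages its own devices
    for `edge_rounds` local rounds before one costly cloud aggregation.
    Raising edge_rounds saves uplink energy at some accuracy risk."""
    edge_models, edge_sizes = [], []
    for devices in clusters:           # devices: list of (params, n_samples)
        models = [p for p, _ in devices]
        sizes = [n for _, n in devices]
        for _ in range(edge_rounds):   # placeholder for local training steps
            models = [fedavg(models, sizes)] * len(models)
        edge_models.append(models[0])
        edge_sizes.append(sum(sizes))
    return fedavg(edge_models, edge_sizes)   # single cloud-level aggregation

cluster_a = [([1.0, 1.0], 10), ([3.0, 3.0], 30)]
cluster_b = [([5.0, 5.0], 60)]
global_model = hierarchical_round([cluster_a, cluster_b], edge_rounds=2)
```

Only the edge aggregators talk to the cloud, so the expensive wide-area uplink is used once per global round regardless of how many devices sit below each edge node.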

16.
Federated learning (FL) was created to enable collaborative training of models without direct data exchange. However, data leakage remains an issue in FL. Multi-Key Fully Homomorphic Encryption (MKFHE) is a promising technique that allows computations on ciphertexts encrypted by different parties; its aptitude for handling multi-party data makes it an ideal tool for implementing privacy-preserving federated learning. We present a multi-hop MKFHE scheme with compact ciphertext. In MKFHE, a compact ciphertext means that the ciphertext size is independent of the number of parties, and the multi-hop property means that parties can dynamically join the homomorphic computation at any time. Prior MKFHE schemes could not combine these desirable properties. To address this limitation, we propose a multi-hop MKFHE scheme with compact ciphertext based on a random-sample common reference string (CRS). We construct our scheme on the residue number system (RNS) variant of the CKKS17 scheme, which enables efficient homomorphic computation over complex numbers thanks to RNS representations of numbers. On top of this multi-hop MKFHE, we build a round-efficient privacy-preserving federated learning scheme. In FL there is always the possibility that some clients drop out during the computation, an issue previous HE-based FL methods did not address. Our approach takes advantage of the multi-hop property, under which users can join dynamically, to construct an efficient federated learning scheme that reduces interactions between parties. Compared to other HE-based methods, our approach reduces the number of interactions per round from 3 to 2; in situations where some users fail, we reduce the number of interactions from 3 to just 1.

17.
Cloud Computing can be seen as one of the latest major evolutions in computing, offering unlimited possibilities to use ICT in various domains: business, smart cities, medicine, environmental computing, mobile systems, and the design and implementation of cyber-infrastructures. The recent expansion of cloud systems has led to adapting resource-management solutions for large numbers of widely distributed, heterogeneous datacenters. The adaptive methods used in this context are oriented toward self-stabilizing, self-organizing, and autonomic systems; dynamic, adaptive, and machine-learning-based distributed algorithms; and the fault tolerance, reliability, and availability of distributed systems. The pay-per-use economic model of Cloud Computing brings a new challenge: maximizing profit for service providers, minimizing total cost for customers, and being environmentally friendly. This special issue presents advances in virtual-machine assignment and placement, multi-objective and multi-constraint job scheduling, resource management in federated clouds and heterogeneous environments, dynamic topologies for data distribution, workflow performance improvement, energy-efficiency techniques, and the assurance of Service Level Agreements.

18.
Edge computing is an extension of cloud computing in which physical servers are installed closer to devices to minimize latency. Edge data centers host a growing abundance of applications with small capacity compared to conventional data centers. Under this framework, Federated Learning was proposed to offer distributed training strategies, coordinating many mobile devices to train a shared Artificial Intelligence (AI) model without actually revealing the underlying data, which significantly enhances privacy. Federated learning (FL) is a recently developed decentralized deep-learning methodology in which clients independently train localized neural-network models on private data, and a global model is then combined on the core server. Aggregation on the edge server takes little time because the edge server has ample compute, but the time needed to collect model data from smartphone users has a significant impact on the time a single cycle of FL operations takes. This study focuses on a machine-learning planning system that uses FL to minimize model training time and total time consumption while recognizing the energy constraints of mobile devices. To further speed up convergence and reduce the amount of data, it implements an optimization agent that establishes the optimal aggregation policy, together with an architecture in which learners are shared among several workers. The main solutions and lessons learned are discussed along with future prospects. Experiments show that the method is superior in terms of the effective and elastic use of resources.

19.
Existing differentially private federated learning algorithms train with a fixed clipping threshold and a fixed noise scale, which can leak data privacy and lower model accuracy. To address this, a piecewise-clipping federated learning algorithm based on differential privacy is proposed. First, clients are divided by privacy requirement into high and low: high-privacy users clip gradients dynamically with adaptive clipping, while low-privacy users use proportional clipping. Then the noise scale is adapted to the post-clipping threshold. Experimental analysis shows that the algorithm protects private data better and has lower communication cost than the ADP-FL and DP-FL algorithms, while improving model accuracy by 2.25% and 4.41% over ADP-FL and DP-FL, respectively.
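A sketch of the piecewise policy described above; the running-quantile adaptive threshold, the proportional ratio, and the noise multiplier are all illustrative assumptions rather than the paper's exact choices:

```python
import random

def clip_and_noise(grad, threshold, noise_mult):
    """Clip the update to L2 norm <= threshold, then add Gaussian noise
    scaled to the (post-clipping) threshold, DP-SGD-style."""
    norm = sum(g * g for g in grad) ** 0.5
    scale = min(1.0, threshold / norm) if norm else 1.0
    return [g * scale + random.gauss(0, noise_mult * threshold) for g in grad]

def piecewise_clip(grad, high_privacy, history, ratio=0.5, quantile=0.5):
    """Piecewise policy sketch: high-privacy clients clip with an adaptive
    threshold (a running quantile of past update norms), low-privacy
    clients clip proportionally to their own update norm."""
    norm = sum(g * g for g in grad) ** 0.5
    if high_privacy:
        past = sorted(history) or [norm]
        threshold = past[int(quantile * (len(past) - 1))]
    else:
        threshold = ratio * norm
    return clip_and_noise(grad, threshold, noise_mult=0.01), threshold

random.seed(0)
update = [3.0, 4.0]                 # L2 norm 5.0
noisy, th = piecewise_clip(update, high_privacy=False, history=[])
```

Tying the noise scale to the post-clipping threshold is what lets the two client classes coexist: a tighter clip automatically gets proportionally smaller noise, so low-privacy clients are not over-perturbed.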

20.
侯坤池  王楠  张可佳  宋蕾  袁琪  苗凤娟 《计算机应用研究》2022,39(4):1071-1074+1104
Federated learning is a new distributed machine-learning method that lets clients jointly build a shared model without sharing private data. Existing federated learning frameworks, however, apply only to supervised learning, implicitly assuming all client data is labeled. Since labeled data is hard to obtain in practice, this premise of federated model training often fails to hold. To solve this problem, the original federated learning setting is extended and ANN-SSFL, a semi-supervised federated learning model based on autoencoder neural networks, is proposed, which allows unlabeled clients to participate in federated learning. Autoencoder neural networks learn classifiable latent features from unlabeled data, letting unlabeled data contribute its feature information to federated learning. Experiments on the MNIST dataset show that the proposed ANN-SSFL model is practical: with the number of supervised clients fixed, adding unsupervised clients improves the accuracy of the original federated learning.
