Similar Literature
20 similar documents retrieved (search time: 171 ms)
1.
Edge computing plays an extremely important role in environment perception and data processing for autonomous driving. Autonomous vehicles can extend their perception range by obtaining environmental information from edge nodes, and can offload computation tasks to edge nodes to compensate for insufficient on-board computing resources. Compared with cloud computing, edge computing avoids the high latency caused by long-distance data transmission, provides faster responses to autonomous vehicles, and reduces the load on the backbone network. On this basis, this paper first introduces...

2.
The rapid growth of Internet of Things (IoT) applications poses a huge challenge to the computing capability of user devices. Fog computing (FC) networks can provide nearby, fast computing services, offering a solution for resource-constrained user devices with limited computing power. This paper proposes a blockchain-based fog network model in which user devices can offload computation-intensive tasks to nodes with strong computing capability. To minimize task processing delay and energy consumption, two task offloading models are introduced: device-to-device (D2D) cooperative-group offloading and fog node (FN) offloading. In addition, to address data-security issues during task offloading in fog networks, blockchain technology is introduced to build a decentralized distributed ledger that prevents malicious nodes from tampering with transaction records and ensures secure and reliable data transmission. To reduce the delay and energy consumption of the consensus mechanism, an improved voting-based Delegated Proof of Stake (DPoS) consensus mechanism is proposed: FNs whose vote counts exceed a threshold form the validation set, and the FNs in the validation set take turns acting as the manager that generates new blocks. Finally, with the objective of minimizing network cost, the task offloading decision, transmission rate allocation, and computing resource allocation are jointly optimized, and a Task Offloading Decision and Resource Allocation (TODRA) algorithm is proposed to solve the problem; simulation results verify the effectiveness of the algorithm.
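As a toy illustration of the vote-threshold DPoS mechanism described in this abstract, the sketch below selects the validation set and rotates block production round-robin. All node names and vote counts are invented for illustration:

```python
def select_validators(votes, threshold):
    """FNs whose vote count reaches the threshold form the validation set."""
    return [fn for fn, v in sorted(votes.items()) if v >= threshold]

def block_producer(validators, height):
    """Validators take turns as manager: block `height` is produced round-robin."""
    return validators[height % len(validators)]

votes = {"FN1": 120, "FN2": 45, "FN3": 98, "FN4": 210}
validators = select_validators(votes, threshold=90)
print(validators)                     # -> ['FN1', 'FN3', 'FN4']
print(block_producer(validators, 5))  # -> 'FN4'
```

The round-robin rotation is deterministic given the validator list, which is what lets every node agree on the next block producer without extra messages.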

3.
To address the problem that rapidly growing multimedia services in the Internet of Vehicles (IoV) generate massive data exchange and impose a heavy burden on mobile networks, a V2X cooperative caching and resource allocation framework based on mobile edge computing is constructed. A V2X cooperative caching and resource allocation mechanism is proposed to allocate computing, caching, and communication resources within the network effectively; a graph coloring model is used to assign channels to offloading users; and the Lagrange multiplier method is adopted to allocate power and computing resources. Simulation results show that under different...
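In the classic single-constraint case, Lagrange-multiplier power allocation reduces to the standard water-filling solution. A minimal sketch under that assumption (the channel gains and power budget below are illustrative, not the paper's actual model) finds the water level by bisection:

```python
def water_filling(gains, total_power, tol=1e-9):
    """Allocate p_i = max(0, mu - 1/g_i) so that sum(p_i) == total_power,
    finding the water level mu (tied to the Lagrange multiplier) by bisection."""
    used = lambda mu: sum(max(0.0, mu - 1.0 / g) for g in gains)
    lo, hi = 0.0, 1.0 / min(gains) + total_power  # bracket for mu
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if used(mid) > total_power:
            hi = mid
        else:
            lo = mid
    mu = (lo + hi) / 2
    return [max(0.0, mu - 1.0 / g) for g in gains]

alloc = water_filling([4.0, 1.0, 0.5], total_power=3.0)
print([round(p, 3) for p in alloc])  # stronger channels receive more power
```

The bisection works because the total power used is monotonically increasing in the water level mu.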

4.
The development of communication technologies that support traffic-intensive applications presents new challenges in designing a real-time traffic analysis architecture and an accurate method suitable for a wide variety of traffic types. Current traffic analysis methods are executed on the cloud, which requires uploading the traffic data. Fog computing is a more promising way to save bandwidth resources by offloading these tasks to fog nodes. However, traffic analysis models based on traditional machine learning need to retrain on all traffic data when updating the trained model, which is not suitable for fog computing because of its limited computing power. In this study, we design a novel fog computing based traffic analysis system using broad learning. For one thing, fog computing provides a distributed architecture for saving bandwidth resources. For another, we use broad learning to incrementally train on the traffic data, which is better suited to fog computing because it supports incremental model updates without retraining on all data. We implement our system on the Raspberry Pi, and experimental results show a 98% probability of accurately identifying the traffic data. Moreover, our method trains faster than a Convolutional Neural Network (CNN).
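The incremental-update property attributed to broad learning above can be illustrated with a much simpler stand-in: a ridge-regression readout whose sufficient statistics are accumulated per batch, so each new traffic batch updates the model without revisiting old data. This is the general recursive-least-squares idea, not the full BLS architecture; the data here is synthetic:

```python
import random

class IncrementalRidge:
    """1-D ridge readout trained from running sums: each new batch only
    updates s_xx and s_xy, so earlier data never has to be revisited."""
    def __init__(self, lam=1e-3):
        self.s_xx = lam    # running sum of x*x, ridge-regularized
        self.s_xy = 0.0    # running sum of x*y

    def update(self, xs, ys):
        for x, y in zip(xs, ys):
            self.s_xx += x * x
            self.s_xy += x * y

    def weight(self):
        return self.s_xy / self.s_xx

rng = random.Random(0)
w_true = 2.0
model = IncrementalRidge()
for _ in range(5):                     # five arriving "traffic" batches
    xs = [rng.gauss(0.0, 1.0) for _ in range(40)]
    model.update(xs, [w_true * x for x in xs])
print(round(model.weight(), 4))        # recovers w_true ~ 2.0
```

Each `update` call costs time proportional only to the new batch, which is the property that makes this style of training attractive on low-power fog hardware.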

5.
In order to achieve the best balance among latency, computation rate, and energy consumption for an edge access network of the IoV, a distributed offloading algorithm based on a deep Q network (DQN) is considered. First, the tasks of different vehicles are prioritized according to the analytic hierarchy process (AHP), assigning different weights to the task processing rates to establish a relationship model. Second, by introducing DQN-based edge computing, a task offloading model is established with the weighted sum of task processing rates as the optimization goal, realizing long-term utility for offloading decisions. Performance evaluation results show that, compared with the Q-learning algorithm, the proposed method reduces the average task processing delay and effectively improves task offloading efficiency.
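The AHP prioritization step can be sketched with the standard geometric-mean approximation of the principal eigenvector; the pairwise comparison values below are purely illustrative, not taken from the paper:

```python
import math

def ahp_weights(pairwise):
    """Geometric-mean approximation of AHP priority weights:
    normalize the geometric mean of each row of the comparison matrix."""
    n = len(pairwise)
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

# three hypothetical task classes: safety > navigation > infotainment
pairwise = [[1.0, 3.0, 5.0],
            [1/3, 1.0, 3.0],
            [1/5, 1/3, 1.0]]
w = ahp_weights(pairwise)
print([round(x, 3) for x in w])  # weights sum to 1, safety weighted highest
```

The resulting weights are exactly the kind of per-task-class coefficients that a weighted-sum objective such as the one above would consume.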

6.
To cope with the dynamically changing offloading environment of computation-intensive, divisible tasks in the IoV and the differences in communication and computing resources among cooperative nodes, a distributed offloading strategy with serial offloading and parallel computing across multiple cooperative nodes under V2X is proposed. The strategy exploits the predictable trajectories of vehicles to split tasks unequally and compute them in a distributed manner on the local vehicle, the MEC server, and cooperating vehicles, formulating an optimization problem that minimizes system delay. To solve it, a game-theoretic offloading mechanism is designed to determine the serial offloading order of the cooperative nodes; given the dynamic, time-varying nature of the IoV, a sequential quadratic programming algorithm yields the optimal unequal task splitting. Simulation results show that the proposed strategy effectively reduces the system delay of computation tasks and maintains stable system performance under different parameter settings when multiple cooperative nodes provide distributed offloading services.

8.
Cloud computing systems such as the Internet of Things (IoT) are usually built on a three-layer architecture (IoT-Fog-Cloud) in which task offloading compensates for resource constraints. Offloading to the right location is the most significant challenge in this field. Based on power and performance metrics, it is more appropriate to offload tasks to the fog than to the cloud, but fog resources are more limited than cloud resources. This paper optimizes these factors in the fog by specifying the number of usable fog servers. For this purpose, we model a fog computing system using queueing theory. Furthermore, binary search and reinforcement learning algorithms are proposed to determine the minimum number of servers with the lowest power consumption. We evaluate the cost of the fog in different scenarios. By solving the model, we find that the proposed dispatching policy is very flexible, outperforms the known policies by up to 31%, and is in no case worse than either of them; the overall offloading cost increases when the fog rejects tasks with high probability. Simulation results show that our offloading method is more effective than running all fog servers simultaneously. The agreement between the simulation results and those derived from the analytical method indicates that the model and results are valid.
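The binary-search idea in this abstract can be sketched on an M/M/c queue: find the fewest servers whose mean response time meets a target, using the Erlang C formula. The arrival and service rates below are illustrative, not the paper's workload:

```python
import math

def erlang_c(c, a):
    """Probability that a job waits in an M/M/c queue with offered load a."""
    s = sum(a**k / math.factorial(k) for k in range(c))
    top = a**c / math.factorial(c) * (c / (c - a))
    return top / (s + top)

def mean_response(c, lam, mu):
    """Mean response time W = Wq + 1/mu; infinite if the queue is unstable."""
    a = lam / mu
    if a >= c:
        return math.inf
    return erlang_c(c, a) / (c * mu - lam) + 1.0 / mu

def min_servers(lam, mu, target, c_max=64):
    """Smallest c with mean_response(c) <= target; response decreases in c,
    so the feasibility predicate is monotone and binary search applies."""
    lo, hi = 1, c_max
    while lo < hi:
        mid = (lo + hi) // 2
        if mean_response(mid, lam, mu) <= target:
            hi = mid
        else:
            lo = mid + 1
    return lo

# illustrative workload: 8 tasks/s arriving, each server handles 1 task/s
print(min_servers(lam=8.0, mu=1.0, target=1.3))  # -> 10
```

Running only the minimum number of servers is what yields the power saving over keeping every fog server on.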

9.
With the widespread application of wireless communication technology and continuous improvements to Internet of Things (IoT) technology, fog computing architectures composed of edge, fog, and cloud layers have become a research hotspot. This architecture uses Fog Nodes (FNs) close to users to implement certain cloud functions while compensating for the cloud's disadvantages. However, because of the limited computing and storage capabilities of a single FN, it is necessary to offload tasks to multiple cooperating FNs for task completion. To realize task offloading effectively and quickly, we use network calculus theory to establish an overall performance model for task offloading in a fog computing environment and propose a Globally Optimal Multi-objective Optimization algorithm for Task Offloading (GOMOTO) based on the performance model. The results show that the proposed model and algorithm effectively reduce the total delay and total energy consumption of the system and improve the network Quality of Service (QoS).

10.
Introducing V2V computation offloading into the IoV can alleviate the current shortage of computing resources at roadside units (RSUs) in offloading hotspots. However, during offloading, a serving vehicle may leave its group due to failure or by its own choice. How to return task results and allocate computation tasks efficiently are key problems requiring further study. An intra-group computation task allocation algorithm is proposed that accounts for the factors that may cause vehicles to leave the group, as well as each vehicle in the group...

11.
In order to improve the efficiency of task processing and reduce the energy consumption of new energy vehicles (NEVs), an adaptive dual task offloading decision-making scheme for the Internet of Vehicles is proposed based on the information-assisted service of road side units (RSUs) and task offloading theory. Taking roadside parking space recommendation as the specific application scenario, a task offloading model is built and a hierarchical self-organizing network model is constructed that utilizes computing power sharing among nodes, RSUs, and mobile edge computing (MEC) servers. Task scheduling is performed through the adaptive task offloading decision algorithm, which helps realize an energy-saving and environmentally friendly available-parking-space recommendation service. Compared with traditional task offloading decisions, the proposed scheme takes less time and less energy over the whole task process. Simulation results verify the effectiveness of the proposed scheme.

12.
To maximize the digital-currency revenue of mobile terminals by optimally offloading tasks under limited residual system resources, this paper proposes a task offloading scheme based on nodes' residual resources and network delay in a system integrating blockchain and fog computing. To achieve optimal offloading, the expected revenue of the mobile terminal is first analyzed based on the task volume; then the terminal's expenditure is analyzed jointly in terms of the residual computing resources, storage resources, and power resources of network nodes as well as the network delay. A mathematical optimization model is then established with the objective of maximizing the terminal's digital-currency revenue, and a simulated annealing (SA) algorithm is used to solve it. Simulation results demonstrate the effectiveness of the scheme.
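A generic simulated-annealing loop for binary offloading decisions looks like the sketch below; the utility function is a toy stand-in for the paper's revenue-minus-expenditure model, with invented numbers:

```python
import math, random

def anneal(utility, n, steps=4000, t0=1.0, alpha=0.995, seed=0):
    """Maximize utility over binary offload vectors by simulated annealing:
    flip one decision per step, accept worse moves with probability exp(du/t)."""
    rng = random.Random(seed)
    x = [0] * n                        # 0 = compute locally, 1 = offload
    u = utility(x)
    best, best_u = x[:], u
    t = t0
    for _ in range(steps):
        y = x[:]
        y[rng.randrange(n)] ^= 1       # flip one device's decision
        du = utility(y) - u
        if du >= 0 or rng.random() < math.exp(du / t):
            x, u = y, u + du
            if u > best_u:
                best, best_u = x[:], u
        t *= alpha                     # cool down
    return best, best_u

# toy utility: offloading earns per-device revenue but congests a shared link
gains = [5, 3, 8, 2, 7]
util = lambda x: sum(g * xi for g, xi in zip(gains, x)) - 2 * sum(x) ** 2
best, best_u = anneal(util, n=5)
print(best, best_u)
```

Early on, the high temperature lets the search escape poor local choices; as `t` decays, the loop becomes greedy and settles on a near-optimal offloading vector.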

13.
In the Internet of Vehicles (IoV), executing all vehicle computation tasks on a cloud platform cannot meet real-time information processing requirements; with mobile edge computing and task offloading, users' computation tasks can instead be offloaded to servers near the network edge. In dense environments, however, offloading all tasks to nearby edge servers imposes an enormous load on those servers. This paper proposes a simulated-annealing-based task offloading method for vehicular users in mobile edge computing. By defining the users' offloading utility with joint consideration of time and energy consumption, and combining a simulated annealing mechanism, the system offloading utility is optimized according to the current road density, changing users' offloading decisions between local execution and edge-server execution so that all users in the given environment receive low-latency, high-quality service. Simulation results show that the algorithm reduces users' task computation time while also reducing energy consumption.

14.
Chen Siguang, Ge Xinwei, Wang Qian, Miao Yifeng, Ruan Xiukai. Wireless Networks, 2022, 28(7): 3293-3304

In view of existing computation offloading research on fog computing network scenarios, most works focus on reducing energy consumption and delay and lack joint consideration of smart-device rechargeability. This paper proposes a deep-deterministic-policy-gradient-based intelligent rechargeable fog computation offloading mechanism combined with simultaneous wireless information and power transfer (SWIPT) technology. Specifically, an optimization problem that minimizes the total energy consumption for completing all tasks in a multiuser scenario is formulated, jointly optimizing the task offloading ratio, uplink channel bandwidth, power split ratio, and computing resource allocation. For this nonconvex optimization problem with a continuous action space, a communication-, computation-, and energy-harvesting-aware intelligent computation offloading algorithm is developed. It achieves optimal energy consumption and delay, and, similar to a double deep Q-network, an inverting-gradient-updating-based dual actor-critic neural network design improves the convergence and stability of the training process. Finally, simulation results validate that the proposed mechanism converges quickly and effectively reduces energy consumption with the lowest task delay.


15.
For wireless-powered mobile edge computing (MEC) networks, this paper defines computation delay as the time consumed by data offloading plus computation, and proposes a multi-dimensional resource allocation method that minimizes the sum of node computation delays. First, under the nodes' energy-causality constraints, a sum-delay minimization problem is formulated by jointly optimizing the working duration of the dedicated energy station, the task-splitting coefficients, the nodes' computing frequencies, and their transmit powers. Because of coupled optimization variables and max-max functions, the problem is nonconvex and cannot be solved optimally with convex optimization tools. Therefore, a series of slack and auxiliary variables are introduced to simplify the problem and decouple the variables; on this basis, by analyzing the structural properties of the simplified problem, a bisection-based iterative algorithm is proposed to obtain the optimal solution of the original problem. Computer simulations verify the correctness of the proposed iterative algorithm and the superiority of the proposed resource allocation method in terms of computation delay.

16.
To address the high transmission delay of delay-sensitive services when cloud computing is applied to wireless sensor networks (WSNs), a low-power, low-delay path-based cooperative computing method for WSNs is proposed. The method is based on a cloud-fog network architecture in which sink nodes form the fog computing layer; during data transmission, task computation is completed step by step using the computing power of the fog layer, reducing task processing delay; ...

17.
Current in-vehicle applications impose stricter delay requirements. Vehicular edge computing (VEC) can make full use of network edge devices, such as roadside units (RSUs), for cooperative processing, effectively reducing delay. Most existing studies assume that RSUs have sufficient computing resources to provide unlimited service, but in practice their resources become constrained as the number of tasks to process grows, limiting delay-sensitive vehicular applications. To address this problem, a multi-task partial offloading scheme for vehicular edge computing is proposed: while fully utilizing the RSU's computing resources, it considers the residual available computing resources of neighboring vehicles to minimize the total task processing delay. First, under delay and resource constraints, the optimal offloading-proportion variables of each task across local execution, the RSU, and neighboring vehicles are allocated; second, to minimize processing delay, suitable idle vehicles within one-hop communication range are selected as the neighboring vehicles that process partial tasks. Simulation results show that the proposed scheme reduces delay better than existing schemes.
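In the idealized case where each processor (local CPU, RSU, neighboring vehicle) has a fixed effective rate and the task is arbitrarily divisible, the optimal partial split equalizes finish times. A minimal sketch under that simplifying assumption, with invented rates:

```python
def equal_finish_split(task_size, rates):
    """Split a divisible task so all processors finish simultaneously:
    share_i = task_size * r_i / sum(rates), finish time = task_size / sum(rates)."""
    total = sum(rates)
    shares = [task_size * r / total for r in rates]
    return shares, task_size / total

# illustrative effective rates: local CPU, RSU, neighboring vehicle
shares, t = equal_finish_split(60.0, rates=[2.0, 8.0, 2.0])
print(shares, t)  # -> [10.0, 40.0, 10.0] 5.0
```

Any other split would leave some processor finishing later, so the makespan can only increase; the real problem adds delay limits and resource constraints on top of this baseline.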

18.
In this paper, we study a UAV-based fog or edge computing network in which UAVs and fog/edge nodes work together intelligently to provide numerous benefits: reduced latency, data offloading, storage, coverage, high throughput, fast computation, and rapid responses. In an existing UAV-based computing network, users send continuous requests to offload their data from the ground to UAV-fog nodes and vice versa, which causes high congestion in the whole network. However, UAV-based networks for real-time applications require low latency when offloading large volumes of data, so the QoS is compromised in such networks during real-time emergencies. To handle this problem, we aim to minimize latency while offloading large amounts of data, take less computing time, and provide better throughput. First, this paper proposes a four-tier architecture of the UAV-fog collaborative network in which local UAVs and UAV-fog nodes perform smart task offloading with low latency. In this network, the UAVs act as fog servers to compute data in collaboration with local UAVs and offload their data efficiently to ground devices. Next, we consider a Q-learning Markov decision process (QLMDP) based on the optimal path to handle massive data requests from ground devices and optimize the overall delay in the UAV-based fog computing network. The simulation results show that the proposed collaborative network achieves high throughput, reduces average latency by up to 0.2, and takes less computing time compared with UAV-based networks and UAV-based MEC networks; thus, it can achieve high QoS.
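A toy tabular Q-learning sketch in the spirit of the QLMDP above: an agent chooses per-request between local compute and offload depending on the load state. The states, rewards, and i.i.d. state transitions are all invented for illustration:

```python
import random

def q_learning(reward, n_states, n_actions, episodes=5000,
               alpha=0.1, gamma=0.9, eps=0.1, seed=1):
    """Tabular Q-learning with epsilon-greedy action selection."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = rng.randrange(n_states)                # random load state
        if rng.random() < eps:
            a = rng.randrange(n_actions)           # explore
        else:
            a = max(range(n_actions), key=lambda x: Q[s][x])
        s2 = rng.randrange(n_states)               # i.i.d. next state (toy model)
        Q[s][a] += alpha * (reward(s, a) + gamma * max(Q[s2]) - Q[s][a])
    return Q

# state 0 = light load (local compute is fast); state 1 = congested (offload wins)
R = {(0, 0): 1.0, (0, 1): 0.2, (1, 0): -0.5, (1, 1): 0.8}
Q = q_learning(lambda s, a: R[(s, a)], n_states=2, n_actions=2)
policy = [max(range(2), key=lambda a: Q[s][a]) for s in range(2)]
print(policy)  # -> [0, 1]
```

The learned greedy policy computes locally under light load and offloads under congestion, the qualitative behavior a latency-aware offloading controller should exhibit.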

19.
To resolve the conflict between limited network resources and massive user demand in vehicular environments, an intelligence-driven vehicular edge computing network architecture is proposed to achieve comprehensive coordination and intelligent management of network resources. Based on this architecture, a joint optimization mechanism for task offloading and service caching is designed, modeling user task offloading and the scheduling of computing and caching resources. Given the dynamic, stochastic, and time-varying nature of vehicular networks, an asynchronous distributed reinforcement learning algorithm yields the optimal offloading decisions and...

20.
Li Bin, Xu Tiancheng. 《电讯技术》 (Telecommunication Engineering), 2023, 63(12): 1894-1901
To address the offloading decision problem for computation-intensive applications with dependent tasks, a priority-based depth-first search scheduling strategy is proposed. Considering users' limited energy and mobility, a network model jointly incorporating downlink energy harvesting and uplink computation task offloading is constructed, and an end-to-end optimization objective function is established on this basis. Combining task priorities and delay constraints, and exploiting the self-learning advantage of deep reinforcement learning, the task offloading decision problem is modeled as a Markov decision process, and a task-dependency-aware Dueling Double DQN (D3QN) algorithm is designed to solve it. Simulation results show that, compared with other algorithms, the proposed algorithm satisfies the delay requirements of more users and reduces task execution delay by 9%-10%.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号