Similar Articles
20 similar articles found (search time: 234 ms)
1.
Mobile edge computing (MEC) improves users' quality of experience by providing computing capability at the edge of the wireless network. However, computation offloading in MEC still faces many challenges. Targeting computation offloading in an ultra-dense networking (UDN) MEC scenario, this paper considers total system energy consumption and formulates a joint optimization problem over offloading decisions and resource allocation. First, a coordinate descent method is used to optimize the offloading decisions. Sub-channels are then allocated, subject to user delay constraints, using an improved Hungarian algorithm combined with a greedy algorithm. Finally, the energy minimization problem is recast as a transmit-power minimization problem and transformed into a convex optimization problem to obtain each user's optimal transmit power. Simulation results show that the proposed offloading scheme minimizes system energy consumption while meeting users' individual delay requirements, effectively improving system performance.
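As an illustration of the sub-channel assignment step, the sketch below solves a tiny minimum-cost user-to-sub-channel assignment by brute force; the cost matrix is hypothetical, and the paper's improved Hungarian algorithm solves the same problem far more efficiently than enumeration.

```python
from itertools import permutations

def min_cost_assignment(cost):
    """Brute-force minimum-cost one-to-one assignment of users to
    sub-channels (tiny instances only); a Hungarian-style algorithm
    solves the same problem in polynomial time."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[u][perm[u]] for u in range(n))
        if total < best_cost:
            best_perm, best_cost = perm, total
    return best_perm, best_cost

# Hypothetical 3-user x 3-sub-channel energy cost matrix (joules).
cost = [[4.0, 1.0, 3.0],
        [2.0, 0.5, 5.0],
        [3.0, 2.0, 2.0]]
assignment, total = min_cost_assignment(cost)
print(assignment, total)  # (1, 0, 2) 5.0
```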

2.
Mobile edge computing (MEC) offloads computation tasks to MEC servers, relieving the computational load on smart mobile devices while reducing service latency. However, task offloading and resource allocation in current MEC systems still face the following problems: 1) a lack of cooperation between edge nodes; 2) task-arrival models that do not match the dynamically changing characteristics of real environments; 3) the dynamic joint optimization of cooperative task offloading and resource allocation. To address these issues, this article builds on a cooperative MEC architecture and proposes a task offloading and resource allocation algorithm based on the multi-agent deep deterministic policy gradient (MADDPG), minimizing the long-term average cost of all users in the system. Simulation results show that the algorithm effectively reduces both system latency and energy consumption.

3.
With the popularity of in-vehicle applications and the growing number of vehicles, the physical resources of roadside infrastructure are limited; when a large number of vehicles join the Internet of Vehicles, both energy consumption and latency increase. A framework integrating content delivery networks (CDN) and mobile edge computing (MEC) can reduce both. In the Internet of Vehicles, vehicle mobility poses a major challenge to the continuity of cloud services, so this paper introduces mobility management (MM) to handle it. An overhead-based dynamic channel allocation (ODCA) algorithm avoids the ping-pong effect and reduces handover time between cells. A cooperative game algorithm based on roadside unit (RSU) scheduling performs virtual machine migration, and a learning-based price-control mechanism is developed to manage MEC computing resources efficiently. Simulation results show that the proposed algorithms improve resource utilization and reduce overhead compared with existing algorithms.

4.
The ITU (International Telecommunication Union) has defined three major application scenarios for 5G: eMBB (enhanced mobile broadband), mMTC (massive machine-type communications), and URLLC (ultra-reliable low-latency communications). MEC (multi-access edge computing), with its support for low-latency services and its traffic and computation offloading capabilities, has become an important feature of 5G. This paper analyzes MEC's system architecture, application scenarios, key technologies, and network security characteristics, and, combined with edge DC (data center) placement strategy, offers recommendations for deploying MEC.

5.
With the rapid development of the Internet of Things (IoT), mobile edge computing (MEC) plays an increasingly prominent role in providing high-performance, low-latency computing services. However, in the time-varying environment of IoT-oriented MEC (MEC-IoT), different edge devices and application services are markedly heterogeneous in latency, energy consumption, and other respects, posing severe challenges to efficient task offloading and resource allocation. To address this, the paper proposes a dynamic distributed heterogeneous task offloading algorithm (D2HM), which combines a distributed game mechanism with Lyapunov optimization theory to design a dynamic resource-bidding mechanism, achieving differentiated control over service types and elastic, on-demand allocation of computing resources. Simulation results show that the algorithm satisfies the diverse computing demands of heterogeneous tasks and reduces average system latency while guaranteeing network stability.

6.
Mobile edge computing uses base stations near users, or roadside units, vehicles, and MEC servers with idle resources, as the network edge, providing devices with the services they need and cloud-like computing capability so as to reduce the latency of network operations and service delivery. This article models the task-assignment problem between mobile devices and MEC servers as a one-to-one matching game, solving the task offloading problem in mobile edge computing. The proposed algorithm scales well, reduces overall energy consumption, and minimizes task offloading latency.

7.
To address the excessive energy consumption and offloading communication overhead of cloud-enhanced fiber-wireless (FiWi) networks, this paper proposes an energy-saving adaptive offloading compression mechanism (ESAOC). Taking into account the attributes and maximum tolerable delay of each service type, it combines the load variation of the optical network units with traffic conditions in the wireless mesh network, statistically estimates the average arrival rate of offloaded data at each priority level, and, together with each node's compression delay, dynamically adjusts the offloading compression ratio to lower the communication overhead of offloading. A queueing model is also built to analyze the queueing delay of offloaded services at the MEC server; wireless relay nodes are scheduled cooperatively, and the optical network units and terminal devices are put to sleep in a coordinated fashion to maximize sleep duration and improve system energy efficiency. Results show that the method effectively reduces total network energy consumption while guaranteeing the delay performance of offloaded services.
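The abstract does not detail its queueing model; as a minimal sketch, an M/M/1 model gives the mean sojourn time of offloaded tasks at the MEC server (the arrival and service rates below are hypothetical):

```python
def mm1_sojourn_time(arrival_rate, service_rate):
    """Mean time an offloaded task spends in an M/M/1 queue
    (waiting + service): W = 1 / (mu - lambda), valid for lambda < mu."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrival rate >= service rate")
    return 1.0 / (service_rate - arrival_rate)

# Hypothetical rates: 8 tasks/s arriving, server completes 10 tasks/s.
print(mm1_sojourn_time(8.0, 10.0))  # 0.5 s mean delay per task
```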

8.
In B5G/6G networks, an unmanned aerial vehicle (UAV) can serve as a mobile edge computing (MEC) server providing communication and computing services to ground terminals (GTs), but its mobility means the communication link can be blocked by surrounding obstacles. A reconfigurable intelligent surface (RIS) can effectively help the UAV improve link quality to the GTs and meet MEC delay requirements. This paper proposes a RIS-assisted joint optimization of UAV trajectory and computation strategy that minimizes MEC service energy consumption by jointly optimizing the UAV's three-dimensional trajectory, computation task distribution, and cache resource allocation. The original non-convex joint optimization problem is solved using the successive convex approximation (SCA) method. In simulations, schemes with a fixed UAV trajectory or a fixed computation strategy serve as baselines, verifying the effectiveness of the proposed scheme, which shows clear performance gains in both energy consumption and data rate.

9.
Building on a summary of the latency budget of existing LTE (Long Term Evolution) systems, this paper describes the key technologies by which 5G improves latency, including shorter TTIs with in-band control channels inside resource blocks, enhanced HARQ (hybrid automatic repeat request), and MEC (mobile edge computing). With these technologies, a control-plane latency of 10 ms and a user-plane latency of 0.5 ms can be achieved, providing technical support for latency-critical applications.

10.
As the mobile Internet develops, services demanding high bandwidth and low latency keep emerging, and how to meet their requirements has become a topic of industry attention. Against this backdrop, MEC (mobile edge computing) has emerged, and the three major Chinese operators have been actively exploring it. At the recent "2017 MEC Technology and Industry Development Summit", Ding Haiyu, director of the Wireless and Terminal Technology Research Institute at the China Mobile Research Institute, said that MEC has become an important part of the 5G network architecture, and that China Mobile is actively researching MEC and has carried out pilot verification and testing on its live network.

11.

Computation offloading at mobile edge computing (MEC) servers can mitigate the resource limitations and reduce the communication latency of mobile devices. In this study, we propose an offloading model for a multi-user, multi-task MEC system. In addition, a new caching concept is introduced for computation tasks, where the application program and related code for completed tasks are cached at the edge server. An efficient model integrating task offloading and caching is then formulated as a nonlinear problem whose goal is to reduce the total time and energy overhead. However, solving such problems is computationally prohibitive, especially for large numbers of mobile users. Thus, an equivalent reinforcement learning formulation is created in which the state space is defined over all possible solutions and actions correspond to movement between states. Two effective algorithms, based on Q-learning and Deep Q-Networks, are then proposed to derive near-optimal solutions. Finally, experimental evaluations verify that the proposed model can substantially reduce mobile devices' overhead by deploying the computation offloading and task caching strategies sensibly.
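A minimal sketch of the tabular Q-learning idea described above, on a toy two-task offloading problem where states are partial decision vectors and the reward is the negative total overhead; the overhead numbers, state encoding, and hyperparameters are illustrative assumptions, not the paper's setup.

```python
import random

random.seed(1)

# Toy problem: place each of 2 tasks either locally (0) or on the edge
# server (1).  A "state" is the partial decision vector built so far;
# the reward at the end is the negative total overhead (hypothetical
# weighted time + energy numbers, not taken from the paper).
OVERHEAD = {(0, 0): 20.0, (0, 1): 12.0, (1, 0): 6.0, (1, 1): 8.0}
ACTIONS = [0, 1]
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.3

Q = {}

def q(state, action):
    return Q.get((state, action), 0.0)

def greedy(state):
    return max(ACTIONS, key=lambda a: q(state, a))

for _ in range(2000):                      # Q-learning episodes
    state = ()
    while len(state) < 2:
        a = random.choice(ACTIONS) if random.random() < EPS else greedy(state)
        nxt = state + (a,)
        if len(nxt) == 2:                  # terminal: pay the total overhead
            target = -OVERHEAD[nxt]
        else:                              # bootstrap from the next state
            target = GAMMA * max(q(nxt, x) for x in ACTIONS)
        Q[(state, a)] = q(state, a) + ALPHA * (target - q(state, a))
        state = nxt

# Roll out the greedy policy to read off the learned offloading decision.
decision = ()
while len(decision) < 2:
    decision += (greedy(decision),)
print(decision)   # the minimum-overhead joint decision: (1, 0)
```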


12.
绳韵  许晨  郑光远 《电信科学》2022,38(2):35-46
To improve the spectral efficiency of mobile edge computing (MEC) networks and serve a large number of users, an ultra-dense MEC system model based on non-orthogonal multiple access (NOMA) is built. To handle the severe communication interference caused by many users offloading simultaneously and to use edge server resources efficiently, a joint task offloading and resource allocation scheme is proposed that minimizes total system energy consumption while meeting users' quality-of-service requirements. The scheme jointly considers offloading decisions, power control, computing resources, and sub-channel allocation. Simulation results show that, compared with other offloading schemes, the proposed scheme effectively reduces system energy consumption while satisfying user quality of service.
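The offloading decision at the heart of such schemes can be sketched as a comparison of local-execution energy against uplink transmission energy; the constants below (effective capacitance, CPU frequency, transmit power, rate) are hypothetical values for illustration, not taken from the paper.

```python
def local_energy(cycles, kappa=1e-26, cpu_hz=1e9):
    """Energy to run a task locally: kappa * f^2 * C (a standard
    CMOS dynamic-power model; kappa and f are hypothetical values)."""
    return kappa * cpu_hz**2 * cycles

def offload_energy(bits, tx_power=0.1, rate_bps=2e6):
    """Energy to transmit the task input over the uplink: P * (L / R)."""
    return tx_power * bits / rate_bps

# Hypothetical task: 1e9 CPU cycles of work, 1 Mbit of input data.
e_local, e_off = local_energy(1e9), offload_energy(1e6)
decision = "offload" if e_off < e_local else "local"
print(e_local, e_off, decision)   # energy in joules and the cheaper choice
```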

13.
Currently, people have easy access to an increasingly diverse range of mobile devices such as personal digital assistants (PDAs), smart phones, and handheld computers. As dynamic content has become dominant on the fast-growing World Wide Web (C. Yuan et al., 2003), it is necessary to provide effective ways for users to access such prevalent Web content in a mobile computing environment. When browsing dynamic content on mobile devices, the requested content is first dynamically generated by the remote Web server, then transmitted over a wireless network, and finally adapted for display on small screens. This leads to considerable latency and processing load on mobile devices. By integrating a novel Web content adaptation algorithm and an enhanced caching strategy, we propose an adaptive scheme called MobiDNA for serving dynamic content in a mobile computing environment. To validate the feasibility and effectiveness of the proposed MobiDNA system, we construct an experimental testbed to investigate its performance. Experimental results demonstrate that this scheme can effectively improve mobile dynamic content browsing by improving Web content readability on small displays, decreasing mobile browsing latency, and reducing wireless bandwidth consumption.
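The enhanced caching strategy itself is not specified in the abstract; as a point of reference, a minimal least-recently-used (LRU) cache, the classic baseline such schemes build on, can be sketched as follows.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal least-recently-used cache: the classic eviction baseline
    that adaptive caching schemes like the one above improve upon."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key):
        if key not in self.store:
            return None
        self.store.move_to_end(key)          # mark as recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict least recently used

cache = LRUCache(2)
cache.put("page1", "<html>A</html>")
cache.put("page2", "<html>B</html>")
cache.get("page1")                 # touch page1 so page2 becomes LRU
cache.put("page3", "<html>C</html>")
print(cache.get("page2"))          # None: page2 was evicted
```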

14.
Internet energy efficiency has recently attracted increasing attention, and new Internet architectures have been proposed to improve the scalability of energy consumption. Content-centric networking (CCN) offers a content-centric paradigm that has been shown to have higher energy efficiency. Based on an energy optimization model of CCN with in-network caching, the authors derive expressions that trade off caching energy against transport energy, and then design a new energy-efficient cache scheme for CCN based on virtual round-trip time (EV). Simulation results show that the EV scheme outperforms the least recently used (LRU) and popularity-based cache policies in average network energy consumption, and its average hop count is also much better than the LRU policy's.

15.
Jointly considering energy optimization and performance improvement in content-centric networking (CCN), this paper proposes an energy-optimized implicit cooperative caching mechanism for CCN. In cache placement, the energy saved by caching serves as the decision criterion, with priority given to caching at nodes far from the user; hop-count information about the nearest upstream cache, carried in data packets, enables implicit cooperation, relieving competition for cache space at nodes near the user and increasing cache diversity among neighboring nodes. In cache replacement, the cached content with the smallest energy saving is replaced, achieving the best energy optimization. Simulation results show that the mechanism attains a good cache hit rate and average routing hop count while effectively reducing network energy consumption.

16.
In-network caching is one of the core technologies of content-centric networking (CCN). Most existing research optimizes network resource utilization while ignoring network energy consumption. This paper first builds an energy model to analyze CCN network energy consumption and designs an energy-efficiency decision criterion to optimize the energy efficiency of the caching process. On this basis, it proposes an energy-efficiency-aware probabilistic caching mechanism (E2APC) that jointly considers content popularity, node centrality, and other factors. Simulation results show that the mechanism effectively reduces overall network energy consumption while maintaining a high cache hit rate and a small average response hop count.
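A minimal sketch of popularity- and centrality-weighted probabilistic caching; the linear blend and the weights are illustrative assumptions, not E2APC's actual decision rule.

```python
import random

def cache_probability(popularity, centrality, w_pop=0.6, w_cen=0.4):
    """Probability of caching an item at a node, as a weighted blend of
    normalized content popularity and node centrality (the weights and
    the linear form are assumptions for illustration)."""
    return w_pop * popularity + w_cen * centrality

def maybe_cache(popularity, centrality, rng=random):
    """Flip a biased coin: cache the item with the computed probability."""
    return rng.random() < cache_probability(popularity, centrality)

# A popular item at a highly central node is cached most of the time.
random.seed(42)
hits = sum(maybe_cache(0.9, 0.8) for _ in range(1000))
print(hits / 1000)   # close to the 0.86 caching probability
```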

17.
We consider a basic scenario in wireless data access: a number of mobile clients are interested in a set of data items kept at a common server. Each client independently sends requests to inform the server of its desired data items, and the server replies on a broadcast channel. We study the energy consumption characteristics of this scenario. First, we define a utility function for quantifying performance. Based on it, we formulate the wireless data access scenario as a noncooperative game: the wireless data access (WDA) game. Although our proposed probabilistic data access scheme does not rely on client caching, game-theoretical analysis shows that clients do not always need to send requests to the server. Simulation results also indicate that our scheme, compared with a simple always-request one, increases the utility and lifetime of every client while reducing the number of requests sent, at the cost of slightly larger average query delay. We also compare our scheme with two popular schemes that employ client caching. Our results show that caching only benefits clients with high query rates, at the expense of shorter lifetime and smaller utility for the other clients.

18.
Uploading and downloading content have recently become one of the major drivers of Internet traffic growth. With the increasing popularity of social networking tools and their video upload/download applications, as well as connectivity enhancements in wireless networks, it has become second nature for mobile users to access on-demand content on the go. Urban hot spots, usually implemented via wireless relays, answer those users' bandwidth needs. On the other hand, the same popular contents are usually acquired by a large number of users at different times, and fetching them from the initial content source every time makes inefficient use of network resources. In-network caching addresses this problem by bringing contents closer to users. Although in-network caching has previously been studied from the perspectives of latency and transport energy minimization, energy-efficient schemes that prolong user equipment lifetime have not been considered. To address this problem, we propose the cache-at-relay (CAR) scheme, which utilizes wireless relays for in-network caching of popular contents with content access and caching energy minimization objectives. CAR consists of three integer linear programming models, namely select relay, place content, and place relay, which respectively solve the content access energy minimization problem, the joint minimization of content access and caching energy, and the joint minimization of content access energy and relay deployment cost. We show that place relay significantly minimizes the content access energy consumption of user equipment, while place content provides a compromise between the content access and caching energy budgets of the network. Copyright © 2015 John Wiley & Sons, Ltd.

19.
In-network caching is one of the most important issues in content-centric networking (CCN) and strongly influences the performance of the caching system. Although much work has been done on in-network caching scheme design in CCN, most of it does not jointly address multiple network attribute parameters. To fill this gap, a new in-network caching scheme based on grey relational analysis (GRA) is proposed. The authors first define two new metric parameters: the request influence degree (RID), which indicates the importance of a node along the content delivery path from the perspective of arriving interest packets, and the cache replacement rate, which denotes the caching load of a node. Combining these with the number of hops a request travels from the user and the node traffic, four network attribute parameters are considered in the caching algorithm design. Based on these four parameters, a GRA-based in-network caching algorithm is proposed that can significantly improve the performance of CCN. Finally, extensive simulation with ndnSIM demonstrates that the GRA-based caching scheme achieves a lower load on the source server and fewer average hops than the existing betweenness (Betw) and ALWAYS schemes.
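A minimal sketch of grey relational analysis over candidate caching nodes: each node is scored against the ideal value of every attribute using the standard GRA coefficient with distinguishing coefficient ρ = 0.5. The attribute values and equal weights are hypothetical, not the paper's data.

```python
def grey_relational_grades(candidates, weights=None, rho=0.5):
    """Grey relational analysis: score each candidate node against the
    best (max) value of every attribute; a higher grade means a better
    caching location.  Inputs are assumed normalized to [0, 1]."""
    n_attrs = len(candidates[0])
    weights = weights or [1.0 / n_attrs] * n_attrs
    # Reference sequence: the best value of each attribute.
    ref = [max(c[j] for c in candidates) for j in range(n_attrs)]
    deltas = [[abs(c[j] - ref[j]) for j in range(n_attrs)] for c in candidates]
    d_min = min(min(row) for row in deltas)
    d_max = max(max(row) for row in deltas)
    grades = []
    for row in deltas:
        # Grey relational coefficient per attribute, then weighted sum.
        coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in row]
        grades.append(sum(w * x for w, x in zip(weights, coeffs)))
    return grades

# Hypothetical nodes scored on [RID, 1 - replacement rate, 1/hops, traffic].
nodes = [[0.9, 0.8, 0.5, 0.7],
         [0.4, 0.9, 0.9, 0.3],
         [0.6, 0.5, 0.6, 0.9]]
grades = grey_relational_grades(nodes)
print(grades.index(max(grades)))   # index of the best caching node
```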

20.
N.  D.  Y.   《Ad hoc Networks》2010,8(2):214-240
The production of cheap CMOS cameras able to capture rich multimedia content, combined with the creation of low-power circuits, gave birth to what are called Wireless Multimedia Sensor Networks (WMSNs). WMSNs introduce several new research challenges, mainly related to mechanisms for delivering application-level quality of service (e.g., latency minimization). Such issues were almost completely ignored in traditional WSNs, where research focused on minimizing energy consumption. Toward this goal, cooperatively caching multimedia content in sensor nodes can efficiently address the resource constraints, variable channel capacity, and in-network processing challenges associated with WMSNs, and technological advances in gigabyte-storage flash memories make sensor caching an ideal solution for latency minimization. With caching, though, comes the issue of maintaining the freshness of cached contents. This article proposes a new cache consistency and replacement policy, called NICC, to address cache consistency in a WMSN. The proposed policies recognize and exploit the mediator nodes that lie at the most "central" points of the sensor network, so that they can forward messages with small latency. By utilizing mediator nodes between the source node and cache nodes, both push-based and pull-based strategies can be applied to minimize query latency and communication overhead. Simulation results attest that NICC outperforms the state-of-the-art cache consistency policy for MANETs.

