Similar Articles
18 similar articles found (search time: 187 ms)
1.
张欢  江帆  孙长印 《信号处理》2021,37(7):1316-1323
To improve the edge-caching efficiency of Fog Radio Access Networks (F-RAN), a cooperative content caching strategy based on user-preference prediction and content-popularity prediction is proposed. First, the Latent Dirichlet Allocation (LDA) topic model is used to dynamically predict user preferences. Next, content popularity is predicted online from the topology among network devices and the predicted user preferences, and the correlation between base stations is then exploited to reduce duplication of cached files. Finally, with cache hit ratio maximization as the objective, the Q-learning algorithm from reinforcement learning is used to obtain the optimal content caching policy. Simulation results show that, compared with other content caching strategies, the proposed strategy effectively improves the cache hit ratio.
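The Q-learning step in entry 1 can be illustrated with a minimal, self-contained sketch: a stateless Q-learner chooses which single content to cache and is rewarded on hits, converging to the most popular item. The single-slot cache, popularity vector, and hyperparameters are illustrative assumptions, not the paper's actual F-RAN model.

```python
import random

def q_learning_single_slot(popularity, episodes=5000, alpha=0.1,
                           epsilon=0.1, seed=0):
    """Stateless Q-learning over cache actions: action = which content
    to cache in a single slot, reward = 1 if the next request hits it.
    A toy stand-in for the paper's multi-cache hit-ratio optimization."""
    rng = random.Random(seed)
    n = len(popularity)
    q = [0.0] * n
    items = list(range(n))
    for _ in range(episodes):
        # epsilon-greedy action selection
        if rng.random() < epsilon:
            a = rng.randrange(n)
        else:
            a = max(items, key=lambda i: q[i])
        # environment: draw the next request from the popularity distribution
        req = rng.choices(items, weights=popularity)[0]
        reward = 1.0 if req == a else 0.0
        q[a] += alpha * (reward - q[a])   # incremental Q-update
    return q

pop = [0.5, 0.25, 0.15, 0.1]              # Zipf-like popularity (assumed)
q = q_learning_single_slot(pop)
best = max(range(len(q)), key=lambda i: q[i])   # learned content to cache
```

With enough episodes the learned action is the most popular content, whose Q-value estimates its hit probability.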

2.
陈龙  汤红波  罗兴国  柏溢  张震 《通信学报》2016,37(5):130-142
To address the problems of obtaining popularity for a massive number of content chunks and of using storage efficiently in the in-network caching system of Information-Centric Networking (ICN), a cache-benefit optimization model over chunk popularity is built with the objective of maximizing the total saved content-access cost, and a benefit-aware caching mechanism is proposed. The mechanism exploits the filtering effect that caches exert on request streams: while maximizing the benefit at a single cache, it implicitly achieves inter-node cooperation and cache diversity. A Bloom-filter-based sliding-window policy detects request inter-arrival times while accounting for the cost of fetching content from the origin server, thereby capturing high-benefit chunks. Analysis shows that the method greatly compresses the storage overhead of tracking content popularity; simulation results show that it achieves fairly accurate popularity-based benefit awareness and, under dynamically changing popularity, offers advantages in bandwidth saving and cache hit ratio.
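The Bloom-filter sliding window in entry 2 can be sketched as follows: a chunk is admitted to the cache only if it reappears within the current window, i.e. its request inter-arrival time is short. The window reset policy and filter parameters are illustrative assumptions; the paper's scheme additionally weighs the fetch cost from the origin server.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions per item over a bit array."""
    def __init__(self, size=1024, k=3):
        self.size, self.k = size, k
        self.bits = [False] * size

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.md5(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = True

    def __contains__(self, item):
        return all(self.bits[p] for p in self._positions(item))

def sliding_window_admit(requests, window=100):
    """Admit a chunk only if it is requested twice within one window,
    approximating inter-arrival-time detection with constant space."""
    bf = BloomFilter()
    admitted = set()
    for i, item in enumerate(requests):
        if i > 0 and i % window == 0:
            bf = BloomFilter()      # start a fresh window
        if item in bf:
            admitted.add(item)      # short inter-arrival time: high benefit
        else:
            bf.add(item)
    return admitted

reqs = ["hot", "a", "hot", "b", "c", "d", "hot"]
admitted = sliding_window_admit(reqs, window=100)
```

Compared with exact per-chunk counters, the filter keeps the popularity-tracking state at a few bits per slot, which is the storage compression the abstract claims.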

3.
鄢欢  高德云  苏伟 《电子学报》2017,45(10):2313-2322
Named Data Networking (NDN) is a new content-centric network architecture. Its cache-everywhere policy suffers from excessive cache redundancy and low utilization of neighbors' caches, wasting cache space and degrading caching efficiency. This paper proposes K-Medoids Hash Routing (KMHR), a caching and routing mechanism that combines non-cooperative on-path caching with cooperative off-path caching. It uses the K-medoids algorithm to select the central node of each hierarchical cluster and applies hash routing or shortest-path routing depending on content popularity, guaranteeing the precise location and uniqueness of highly popular content within a cluster, reducing cache redundancy, and improving caching efficiency. Simulations on a real network topology show that KMHR achieves the lowest request time, the best routing gain, and a smaller number of cached items.

4.
Introducing edge caching into fog radio access networks can effectively reduce redundant content transmissions. However, existing caching strategies rarely consider the dynamics of already-cached content. This paper proposes a cache update algorithm based on content popularity and information freshness, which accounts for user mobility and the spatio-temporal dynamics of content popularity and introduces the Age of Information (AoI) to update content dynamically. First, the algorithm predicts each user's location in the next time period from historical location information using a bidirectional long short-term memory (Bi-LSTM) network. Next, the content popularity of each location area is derived from the predicted locations and the users' preference models, and content is cached at the fog access points accordingly. Then, given the AoI requirements of the cached content and the popularity distribution, the cache update window is set dynamically to achieve timely, low-latency caching. Simulation results show that the algorithm effectively improves the cache hit ratio and minimizes the average service delay of cached content while guaranteeing information freshness.
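The AoI-driven update idea in entry 4 can be sketched with a cache that tracks each item's age and flags it for refresh once the age exceeds a popularity-weighted window, so popular content stays fresher. The linear window rule below is an assumption for illustration; the abstract does not specify the paper's exact rule.

```python
class AoICache:
    """Cache that tracks the Age of Information (AoI) per item and flags
    items whose age exceeds a popularity-weighted freshness window."""
    def __init__(self, max_aoi=10.0):
        self.max_aoi = max_aoi
        self.age = {}

    def put(self, item):
        self.age[item] = 0.0        # freshly fetched: age resets to zero

    def tick(self, dt=1.0):
        for k in self.age:          # time passes: every cached item ages
            self.age[k] += dt

    def stale(self, popularity):
        """Items due for refresh. popularity maps item -> [0, 1]; more
        popular items get a smaller AoI budget (assumed linear rule)."""
        return [k for k, a in self.age.items()
                if a >= self.max_aoi * (1.0 - popularity.get(k, 0.0))]

cache = AoICache(max_aoi=10.0)
cache.put("news")       # hot item: tight freshness budget
cache.put("manual")     # cold item: loose freshness budget
for _ in range(6):
    cache.tick()        # six time units elapse; both ages reach 6
popularity = {"news": 0.5, "manual": 0.1}
```

After six ticks only the popular item exceeds its window (6 >= 10 * 0.5) and is scheduled for an update, while the cold item (budget 9) is left alone.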

5.
A cache management strategy based on dynamic content popularity at nodes
张果  汪斌强  张震  梁超毅 《电子学报》2016,44(11):2704-2712
To address the inability of Named Data Networking nodes to perceive changes in content popularity, a content management strategy based on the dynamic popularity of cached content is proposed. The cache is divided into a Primary Cache (PC) and a Secondary Cache (SC), used respectively to identify and to protect popular content; a Standard Bloom Filter (SBF) filters requests for popular content; and a sliding-time-window algorithm combined with a hash table performs fine-grained statistical analysis of the secondary-cache content to manage it. Simulations show that, at the cost of a small increase in complexity compared with existing algorithms, the strategy extends the cache residence time of highly popular content, improves the cache hit ratio, reduces server load, scales well, and can process packets at 40 Gbit/s on a single line.

6.
刘银龙  汪敏  周旭 《通信学报》2015,36(3):187-194
To reduce the global overhead of a P2P caching system, a cooperative caching strategy that minimizes total cost is proposed. The strategy jointly considers the transmission and storage costs in the P2P caching system, measuring a file's caching gain by the inter-ISP link cost, popularity, file size, and storage cost. When replacement is needed, the content with the smallest caching gain is evicted first. Experimental results show that the proposed strategy effectively reduces the system's total cost.
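Entry 6's eviction rule can be sketched directly: compute each file's caching gain and replace the minimum-gain file. The abstract names the factors (inter-ISP link cost, popularity, file size, storage cost) but not the exact formula, so the linear combination below is an illustrative assumption.

```python
def cache_gain(popularity, link_cost, size, storage_cost):
    """Caching gain = expected inter-ISP transfer cost avoided by the
    cache minus the cost of storing the file (assumed linear form)."""
    return popularity * link_cost * size - storage_cost * size

def evict_candidate(files, storage_cost=0.01):
    """Pick the file with the smallest caching gain for replacement.
    files maps name -> (popularity, link_cost, size)."""
    return min(files, key=lambda f: cache_gain(*files[f], storage_cost))

files = {
    "movie": (0.9, 5.0, 100.0),   # popular: high gain, keep
    "doc":   (0.1, 5.0, 100.0),   # unpopular: low gain, evict first
}
```

A popular file of the same size and link cost yields a much larger gain, so the unpopular one is replaced first.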

7.
To meet the demand for instant delivery of massive content in the Internet of Vehicles, this paper studies a vehicle-cooperative wireless caching architecture and designs a dynamic content push mechanism to solve the cache update problem efficiently. Caching utility is defined from vehicle distribution, content popularity, file timeliness, and other factors, and non-orthogonal multiple access (NOMA) is introduced to push multiple files in a single time slot. With cache hit ratio maximization as the objective, a joint optimization problem over vehicles and files is formulated, and an alternating iterative algorithm based on dynamic programming is proposed to reduce the solution complexity. Simulation results show that the proposed scheme reduces delay by at least 16.8% compared with conventional schemes.

8.
蔡君  余顺争  刘外喜 《通信学报》2015,(6):2015222-2015222
Network-wide in-network caching is one of the most important features of the ICN (Information-Centric Networking) architecture. To distribute cached content objects more sensibly in space and time, a caching strategy based on the community importance of nodes (CSNIC) is proposed. Working at the granularity of communities, the strategy not only caches content at nodes that are easy for users within a community to reach, but also gives content objects of different popularity a more reasonable temporal distribution across the nodes of each community. CSNIC was validated by simulation under a variety of experimental conditions. The results show that, compared with CEE-LRU, Betw-LFU, and Opportunistic, it better improves network caching performance metrics including cache hit ratio, hop reduction ratio, content diversity, and replacement count, with only a small extra overhead.

9.
To use the storage space of Named Data Networking (NDN) effectively and cache response content efficiently, this paper adopts differentiated caching and proposes a cooperative caching algorithm based on the correlation of content request sequences. During requests, parallel predictive requests for subsequent related data units are issued in advance, increasing the probability that content requests are answered nearby. For cache decisions, a two-dimensional differentiated caching strategy over storage location and cache residence time is proposed: following the trend of content activity, the storage location is advanced hop by hop in the spatial dimension and the caching time is adjusted dynamically in the temporal dimension, progressively pushing truly popular content to the network edge. The algorithm reduces request latency and cache redundancy and improves the cache hit ratio; simulation results verify its effectiveness.
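The parallel predictive requests in entry 9 amount to: on a request for chunk k, also issue requests for the next few chunks of the same content. The `name/<index>` chunk-naming convention and the prediction depth below are assumptions for illustration.

```python
def predictive_requests(chunk_name, depth=3):
    """Given a request for one chunk, return the parallel predictive
    requests for the next `depth` sequential chunks. Assumes chunk
    names end in '/<integer index>' (illustrative convention)."""
    prefix, idx = chunk_name.rsplit("/", 1)
    return [f"{prefix}/{int(idx) + i}" for i in range(1, depth + 1)]
```

A router receiving the request for chunk 4 would pre-request chunks 5-7, so the follow-up requests can be served from a nearby cache.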

10.
A popularity-based probabilistic cache replacement strategy for Content-Centric Networking
Content-Centric Networking is a research focus of next-generation Internet architecture; through a distributed content caching mechanism it realizes content-centric data transfer, changing the traditional host-based communication model. The cache replacement strategy is one of the core research problems of Content-Centric Networking, and its design directly constrains data transfer performance. Exploiting the popularity distribution of content, this paper proposes a popularity-based probabilistic cache replacement strategy and, for an L-level cascaded Content-Centric Network (CCN), derives an approximate formula for the per-level cache miss probability under the strategy. The strategy chooses the position at which a data chunk is inserted into the cache queue according to the chunk's popularity, balancing the distribution of content of different popularity across the network as far as possible. Numerical results show that the strategy suits network applications with concentrated content requests and, compared with the traditional Least Recently Used (LRU) policy, markedly improves access performance for less popular content.
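The core idea of entry 10, choosing a chunk's queue position from its popularity, can be sketched as below. For clarity this uses a deterministic position rather than a drawn one, which is a stand-in for the paper's probabilistic placement.

```python
def popularity_insert(queue, chunk, popularity):
    """Insert a chunk into the cache queue at a popularity-dependent
    position. Index 0 is the eviction end, so popular chunks land near
    the protected end and survive longer. Deterministic stand-in for
    the probabilistic position choice described in the abstract."""
    pos = int(popularity * len(queue))   # popularity assumed in [0, 1]
    queue.insert(pos, chunk)
    return queue
```

An unpopular chunk (popularity 0) enters at the eviction end and is replaced first; a very popular one (popularity 1) enters at the protected end, which is how the strategy keeps low-popularity content from pushing popular content out.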

11.
In this paper, we investigate an incentive edge caching mechanism for an Internet of Vehicles (IoV) system based on the paradigm of software-defined networking (SDN). We start by proposing a distributed SDN-based IoV architecture. Then, based on this architecture, we focus on the economic side of caching by considering a competitive cache-enabler market composed of one content provider (CP) and multiple mobile network operators (MNOs). Each MNO manages a set of cache-enabled small base stations (SBSs). The CP incites the MNOs to store its popular contents in the cache-enabled SBSs with the highest access probability, to enhance the satisfaction of its users. By leasing their cache-enabled SBSs, the MNOs aim to make more monetary profit. We formulate the interaction between the CP and the MNOs as a Stackelberg game, where the CP acts first as the leader by announcing the quantity of popular content it wishes to cache and fixing the caching popularity threshold, a minimum access probability below which a content cannot be cached. The MNOs then act as followers, responding with the content quantity they accept to cache and the corresponding caching price. A noncooperative subgame is formulated to model the competition between the followers over the CP's limited content quantity. We analyze the leader's and the followers' optimization problems and prove the existence of the Stackelberg equilibrium (SE). Simulation results show that our game-based incentive caching model achieves optimal utilities and outperforms other incentive caching mechanisms with monopoly cache-enablers, while improving user satisfaction by 30% and reducing the caching cost.

12.
The Internet of Things (IoT) has emerged as one of the new use cases in 5th-generation wireless networks. However, the transient nature of the data generated in IoT networks poses great challenges for content caching. In this paper, we study a joint content caching and updating strategy in IoT networks, taking both the energy consumption of the sensors and the freshness loss of the contents into account. In particular, we decide whether or not to cache the transient data and, if so, how often the servers should update their contents. We formulate this content caching and updating problem as a mixed 0-1 integer non-convex optimization program, and devise a Harmony Search based content Caching and Updating (HSCU) algorithm, which is self-learning and derivative-free and hence imposes no requirement on the relationship between the objective and the variables. Finally, extensive simulation results verify the effectiveness of our proposed algorithm in terms of the achieved satisfaction ratio for content delivery, normalized energy consumption, and overall network utility, by comparing it with several baseline algorithms.

13.
Uploading and downloading content have recently become one of the major drivers of the growth in Internet traffic volume. With the increasing popularity of social networking tools and their video upload/download applications, as well as connectivity enhancements in wireless networks, it has become second nature for mobile users to access on-demand content on the go. Urban hot spots, usually implemented via wireless relays, answer the bandwidth needs of those users. On the other hand, the same popular contents are usually acquired by a large number of users at different times, and fetching them from the original content source every time makes inefficient use of network resources. In-network caching addresses this problem by bringing contents closer to the users. Although in-network caching has previously been studied from the perspectives of latency and transport energy minimization, energy-efficient schemes that prolong user equipment lifetime have not been considered. To address this problem, we propose the cache-at-relay (CAR) scheme, which utilizes wireless relays for in-network caching of popular contents with content-access and caching energy minimization objectives. CAR consists of three integer linear programming models, namely select relay, place content, and place relay, which respectively solve the content-access energy minimization problem, the joint minimization of content-access and caching energy, and the joint minimization of content-access energy and relay deployment cost. We show that place relay significantly minimizes the content-access energy consumption of user equipment, while place content provides a compromise between the content-access and caching energy budgets of the network. Copyright © 2015 John Wiley & Sons, Ltd.

14.
To enable Content-Centric Networking (CCN) to serve differentiated service requirements, this paper adopts the differentiated-services idea and, from the perspectives of content transport and cache decision, proposes a diversified content distribution mechanism based on service type. According to the request characteristics of different services, it designs three data distribution modes, namely persistent push, parallel prediction, and per-packet request, with corresponding on-path storage policies of transparent forwarding, edge probabilistic caching, and progressive advancement, so that content delivery perceives and matches the service type. Simulation results show that the mechanism reduces content request latency and improves the cache hit ratio, boosting the overall content distribution performance of CCN at the cost of a small amount of extra control overhead.

15.
Device-to-device (D2D) communications have been viewed as a promising data offloading solution in cellular networks because of the explosive growth of multimedia applications. Because device locations are naturally distributed, distributed caching becomes an important function of D2D communications. Taking advantage of device caching capacity, this work exploits device storage and frequent file reuse to realize distributed content dissemination, that is, storing contents in mobile devices (named helpers). Specifically, we first investigate the average and the lower bound of the helper amount by dividing the network into small areas in which the nodes are within each other's communication radius. The optimal helper amount is then derived from the average helper amount and the network topology. Subsequently, a location-based distributed helper selection scheme for distributed caching is proposed based on the given optimal helper amount. In particular, nodes are selected as helpers according to their locations and degrees, and contents are placed so as to maximize total user utility. Extensive simulation results demonstrate the factors that affect the optimal helper amount and the total user utility. Copyright © 2015 John Wiley & Sons, Ltd.

16.
The explosive growth of mobile data traffic has made cellular operators seek low-cost alternatives for cellular traffic off-loading. In this paper, we consider a content delivery network in which a vehicular communication network composed of roadside units (RSUs) is integrated into a cellular network to serve as an off-loading platform. Each RSU, subject to its storage capacity, caches a subset of the contents of the central content server. Allocating a suitable subset of contents to each RSU cache so as to maximize the hit ratio of vehicle requests is a problem of paramount importance and is the target of this study. First, we propose a centralized solution in which we model the cache content placement problem as a submodular maximization problem and show that it is NP-hard. Second, we propose a distributed cooperative caching scheme in which the RSUs in an area periodically share information about their contents locally and update their caches accordingly. To this end, we model the distributed caching problem as a strategic resource-allocation game that achieves at least 50% of the optimal solution. Finally, we evaluate our scheme using the SUMO (Simulation of Urban MObility) simulator under realistic conditions. On average, the results show an improvement of 8% in hit ratio for the proposed method compared with other well-known cache content placement approaches.
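A common centralized baseline for the placement problem in entry 16 is a greedy density heuristic: cache contents in decreasing order of request rate per unit size until the RSU's capacity is exhausted. This is a standard heuristic for such hit-ratio objectives, sketched here as an assumption; it is not the paper's full submodular or game-theoretic scheme.

```python
def greedy_placement(sizes, capacity, demand):
    """Greedy cache placement at a single RSU: rank contents by request
    rate per unit size (demand density) and cache them until the
    storage capacity runs out. sizes and demand map name -> value."""
    cached, used = [], 0
    for name in sorted(sizes, key=lambda c: demand[c] / sizes[c],
                       reverse=True):
        if used + sizes[name] <= capacity:
            cached.append(name)
            used += sizes[name]
    return cached
```

With capacity 4 and three equal-size contents, the two with the highest request rates are cached and the cold one is left for the central server.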

17.
To reduce the cost of fetching from a remote source, it is natural to cache information near the users who may access it later. However, with the development of 5G ultra-dense cellular networks and mobile edge computing (MEC), choosing sensibly among edge servers for content delivery becomes a problem once the mobile edge has a sufficient number of replica servers. To minimize the total cost, accounting for both the caching and the fetching process, we study the optimal resource allocation for deploying content replica servers. We decompose the total cost as the superposition of the costs over several coverages. In particular, we consider a criterion for determining the coverage of a replica server and formulate the coverage as a trade-off between caching cost and fetching cost. Based on this criterion, a coverage isolation (CI) algorithm is proposed to solve the deployment problem. Numerical results show that the proposed CI algorithm reduces the cost and achieves a higher tolerance across different centrality indices.

18.
To address the uneven node load and ineffective use of storage resources caused by homogeneous caching in Content-Centric Networking, a hotspot-control and content-scheduling caching algorithm is proposed. During content requests, node heat is judged jointly from node betweenness and node access degree to identify overloaded nodes; during cache decisions, content is scheduled to idle neighboring nodes according to popularity, and a lifetime-control mechanism is introduced, so as to disperse requests and eliminate hotspots. Simulation results show that the algorithm reduces request latency and routing hops and, while improving the cache hit ratio, effectively improves the balance of the load distribution across nodes.
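The hotspot-detection step in entry 18 can be sketched as a weighted combination of betweenness centrality and access degree; nodes whose heat exceeds a threshold are treated as overloaded and have content offloaded to idle neighbors. The equal-weight sum and the threshold are illustrative assumptions, since the abstract does not give the paper's exact scoring rule.

```python
def node_heat(betweenness, access_degree, w=0.5):
    """Heat score combining betweenness centrality and access degree
    (both assumed normalized to [0, 1]); w weights the two factors."""
    return w * betweenness + (1 - w) * access_degree

def overloaded(nodes, threshold=0.7):
    """Return the nodes flagged as hotspots. nodes maps
    name -> (betweenness, access_degree)."""
    return [n for n, (b, d) in nodes.items()
            if node_heat(b, d) > threshold]

nodes = {
    "core":  (0.9, 0.8),   # central, heavily requested: hotspot
    "leaf":  (0.2, 0.3),   # peripheral, lightly requested
}
```

Content held by the flagged node would then be scheduled, by popularity, onto idle neighbors to even out the load.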


Copyright©北京勤云科技发展有限公司  京ICP备09084417号