Similar Literature
1.
王珂  江凌云  董唱 《通信技术》2020,(3):678-683
Cache replacement is one of the core research topics in content-centric networking. Compared with the massive volume of content data, cache space is always limited, and a good replacement policy can improve caching benefit. Observing that a content item is more valuable to retain when its probability of being requested in the future is higher and its expected access time is closer to the current moment, this paper proposes a replacement policy based on the expected value of content. The scheme builds an expected-value function for each content item from its popularity and the time distance between its expected access time and the current moment, and evicts the item with the smallest value. Simulation results show that, compared with traditional cache replacement policies, the proposed policy effectively improves the cache hit ratio, reduces the number of request hops, and improves network performance.
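
The following is a minimal sketch of this kind of expected-value replacement. The concrete value function used here (popularity divided by the time distance to the expected next access) and the method names are illustrative assumptions, not the paper's exact formula.

```python
import time

class ExpectedValueCache:
    """Sketch of expected-value-based replacement: each item keeps a popularity
    estimate and an expected next-access time; when the cache is full, the item
    with the smallest value is evicted.  The value function below is an
    illustrative assumption."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = {}  # name -> (popularity, expected_next_access_time)

    def _value(self, name, now):
        popularity, expected_access = self.items[name]
        time_distance = max(expected_access - now, 1e-6)  # nearer access -> larger value
        return popularity / time_distance

    def insert(self, name, popularity, expected_next_access, now=None):
        now = time.time() if now is None else now
        if name not in self.items and len(self.items) >= self.capacity:
            victim = min(self.items, key=lambda n: self._value(n, now))
            del self.items[victim]  # evict the least valuable content
        self.items[name] = (popularity, expected_next_access)

cache = ExpectedValueCache(capacity=2)
cache.insert("a", popularity=0.9, expected_next_access=time.time() + 5)
```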

2.
An Adaptive Buffer Management Strategy with Message Delivery Probability Estimation for Opportunistic Networks
An efficient buffer management strategy can markedly improve the utilization of node buffer resources in opportunistic networks. Since a message's delivery probability directly determines whether it needs to be forwarded or stored, this paper proposes an adaptive buffer management strategy with message delivery probability estimation. By building a node connection-state analysis model, each node perceives node service capability in a distributed manner, estimates the delivery probability of each message, and then determines the forwarding and deletion priorities used to carry out buffer management operations. Results show that the proposed strategy reduces network load by 57%, effectively increases the message delivery ratio, and lowers the average delivery delay.
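
A minimal sketch of how such delivery-probability-driven priorities could be applied, assuming the probability estimates are supplied by the connection-state model described above; the function names and example values are illustrative.

```python
def forwarding_order(messages, delivery_prob):
    """Forward messages with higher estimated delivery probability first."""
    return sorted(messages, key=delivery_prob.get, reverse=True)

def drop_candidate(messages, delivery_prob):
    """When the buffer is full, drop the message least likely to be delivered."""
    return min(messages, key=delivery_prob.get)

probs = {"m1": 0.8, "m2": 0.2, "m3": 0.55}
print(forwarding_order(["m1", "m2", "m3"], probs))  # ['m1', 'm3', 'm2']
print(drop_candidate(["m1", "m2", "m3"], probs))    # 'm2'
```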

3.
Applying information-centric networking (ICN) to the Internet of Things (IoT) architecture (ICN-IoT) can effectively solve the data dissemination problem and improve data transmission efficiency. In ICN-IoT, however, existing caching studies mostly configure caches along a single dimension such as content popularity or freshness, which cannot accommodate the massive and heterogeneous characteristics of IoT data and leads to low caching efficiency. To address this problem, this paper first analyzes the characteristics of IoT data and divides it into periodic data and event-triggered data. It then proposes an ICN-IoT caching scheme with different caching decisions (CS-DCI) that jointly considers these two data types: a router executes the corresponding caching decision according to the type of arriving data. Finally, the caching policies for the two data types are described in detail: for periodic data, content popularity and the time-dependent request probability are considered so that the most requested data are cached; for event-triggered data, content popularity and event-triggering frequency are considered so that meaningful data are cached. Simulations show that the scheme improves the content difference ratio and increases the diversity of cached content, thereby satisfying requests from different ICN-IoT applications, achieving a better cache hit ratio, and reducing the number of hops needed to retrieve content.
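
A minimal sketch of a type-aware caching decision in the spirit of CS-DCI: one rule for periodic data (popularity combined with the time-dependent request probability) and another for event-triggered data (popularity combined with event-triggering frequency). The scoring forms and the threshold are illustrative assumptions.

```python
def cache_decision(data, threshold=0.5):
    """Return True if the arriving data item should be cached."""
    if data["type"] == "periodic":
        score = data["popularity"] * data["request_prob_at_time"]
    elif data["type"] == "event":
        score = data["popularity"] * data["event_frequency"]
    else:
        return False
    return score >= threshold

print(cache_decision({"type": "periodic", "popularity": 0.9,
                      "request_prob_at_time": 0.7}))   # True
print(cache_decision({"type": "event", "popularity": 0.4,
                      "event_frequency": 0.5}))        # False
```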

5.
陈龙  汤红波  罗兴国  柏溢  张震 《通信学报》2016,37(5):130-142
To address the problems of obtaining the popularity of massive numbers of content chunks and using storage resources efficiently in the in-network caching system of information-centric networking (ICN), this paper builds a cache-benefit optimization model over chunk popularity, with the goal of maximizing the total saved cost of content access, and proposes a benefit-aware caching mechanism. Exploiting the filtering effect that caches have on request streams, the mechanism maximizes the caching benefit of a single node while implicitly achieving inter-node cooperation and cache diversity; a Bloom-filter-based sliding-window strategy detects request inter-arrival times while also accounting for the cost of fetching content from the origin server, so as to capture chunks with high caching benefit. Analysis shows that the method greatly compresses the storage overhead of obtaining content popularity; simulation results show that it achieves fairly accurate popularity-based awareness of caching benefit, and that it offers advantages in bandwidth savings and cache hit ratio when content popularity changes dynamically.
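
A minimal sketch of a Bloom-filter sliding window for spotting chunks whose requests recur within the recent window (a proxy for high caching benefit). The window length, filter size, and hash count are illustrative choices, and sets of hashed positions stand in for the bit arrays a real Bloom filter would use.

```python
import hashlib
from collections import deque

class SlidingBloomWindow:
    """Keep one sub-filter per time slot; a name 'seen recently' appears in
    at least one sub-filter of the window."""

    def __init__(self, window=4, bits=1 << 16, hashes=3):
        self.bits, self.hashes = bits, hashes
        self.window = deque([set() for _ in range(window)], maxlen=window)

    def _positions(self, name):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{name}".encode()).hexdigest()
            yield int(digest, 16) % self.bits

    def slide(self):
        # Called periodically: the oldest sub-filter is dropped automatically.
        self.window.append(set())

    def seen_recently(self, name):
        pos = list(self._positions(name))
        return any(all(p in f for p in pos) for f in self.window)

    def record(self, name):
        current = self.window[-1]
        for p in self._positions(name):
            current.add(p)
```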

6.
To address the difficulty of collecting enough data to predict user preferences in device-to-device (D2D) caching, caused by the limited signal coverage of a base station, a D2D cooperative caching strategy based on a graph collaborative filtering model is proposed. First, a graph collaborative filtering model is built: a multi-layer graph convolutional neural network captures high-order connectivity in the user-content interaction graph, and a multi-layer perceptron learns the nonlinear relationship between users and contents to predict user preferences. Second, to minimize the average access delay, the content placement problem is modeled as a Markov decision process that jointly considers user preferences and caching delay gains, and a cooperative caching algorithm based on deep reinforcement learning is designed to solve it. Simulations show that, compared with existing caching strategies, the proposed strategy achieves the best performance under different content categories, user densities, and D2D communication distances.

7.
The massive volume and increasing resolution of video content leave storage capacity and network bandwidth in short supply. A CDN contains many nodes, but no single node can store all content, so the cache replacement method is the key to making the content stored at CDN nodes satisfy user demand as far as possible. This paper proposes a video content cache replacement method based on a natural cooling mechanism: combined with user access behavior, the popularity correlation among different segments of the same content and Newton's law of cooling are introduced as the key factors for deciding how to perform cache replacement. Experiments show that the method significantly improves the caching efficiency of CDN nodes.
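
A minimal sketch of a Newton-cooling "hotness" score for video segments: each access adds heat, and heat decays exponentially over time, mimicking Newton's law of cooling; the coldest segment would be the replacement victim. The cooling coefficient and the bonus heat that could be shared with neighbouring segments of the same video are illustrative assumptions.

```python
import math
import time

COOLING_COEFF = 0.001    # assumed decay rate per second
NEIGHBOR_BONUS = 0.3     # assumed share of heat passed to adjacent segments

class SegmentHeat:
    def __init__(self):
        self.heat = 0.0
        self.last_update = time.time()

    def _cool(self, now):
        # Newton's law of cooling: exponential decay since the last update.
        self.heat *= math.exp(-COOLING_COEFF * (now - self.last_update))
        self.last_update = now

    def access(self, amount=1.0, now=None):
        now = time.time() if now is None else now
        self._cool(now)
        self.heat += amount

    def value(self, now=None):
        now = time.time() if now is None else now
        self._cool(now)
        return self.heat
```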

8.
A Cooperative Caching and Routing Mechanism Based on Local Request Similarity in Named Data Networking
To cache and utilize responded content efficiently in Named Data Networking (NDN), this paper proposes a cooperative caching and routing mechanism that exploits the local similarity of content request distributions. For cache decisions, redundancy elimination along the vertical request path is combined with content placement within the horizontal local area. Vertically, a path caching strategy based on the maximum content-activity factor determines the hottest request region along the forwarding path; horizontally, a consistent-hashing cooperative caching idea is adopted to store responded content at designated nodes within the local area. For routing lookups, local node caches are incorporated into forwarding decisions, and local cache lookups are performed dynamically according to the content activity level, increasing the probability that content requests are answered nearby. The mechanism reduces content retrieval delay and cache redundancy, improves the cache hit ratio, and trades a small extra cost for a large reduction in content request overhead; simulation results verify its effectiveness.
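
A minimal sketch of the consistent-hashing idea used for local cooperative storage: each content name is mapped onto a hash ring and assigned to the next router clockwise, so routers in one area partition the content space without central coordination. The node names and the number of virtual points per node are illustrative.

```python
import bisect
import hashlib

def _h(key):
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes, vnodes=32):
        self._ring = sorted((_h(f"{n}#{i}"), n) for n in nodes for i in range(vnodes))
        self._keys = [k for k, _ in self._ring]

    def node_for(self, content_name):
        # First virtual point clockwise from the content's hash.
        idx = bisect.bisect(self._keys, _h(content_name)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["router-a", "router-b", "router-c"])
print(ring.node_for("/video/clip/seg3"))   # deterministic local placement target
```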

9.
In an information-centric network (Information-Centric Network, ICN), using in-network caches to improve content retrieval and transmission efficiency is the most important feature of the architecture. However, the in-network cache is relatively limited when facing the large volume of content to be forwarded, and content placement lacks a balanced distribution. This paper proposes a Popularity and Centrality Based Caching Scheme (PCBCS), which caches passing content selectively to improve the utilization of cache space at the nodes along the content delivery path and to reduce cache redundancy. Simulation results show that, compared with caching at every node along the path, LCD (Leave Copy Down), and Prob (copy with Probability) with parameters 0.7 and 0.3, the proposed algorithm reduces the server hit rate by 30% on average and the number of hops needed to reach cached content by 20% on average; most importantly, compared with caching at every node along the path, the total number of cache replacements is reduced by 40% on average.
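
A minimal sketch of a popularity/centrality matching rule in the spirit of PCBCS: more popular content is cached at nodes with higher centrality, so hot items sit on well-connected routers while cooler items fall toward the edge. The normalisation and the matching tolerance are illustrative assumptions, not the paper's exact rule.

```python
def should_cache(content_popularity, max_popularity,
                 node_centrality, max_centrality, tolerance=0.2):
    pop_rank = content_popularity / max_popularity   # 0..1, 1 = hottest content
    cen_rank = node_centrality / max_centrality      # 0..1, 1 = most central node
    return abs(pop_rank - cen_rank) <= tolerance

print(should_cache(90, 100, 0.8, 1.0))   # hot item at a central node  -> True
print(should_cache(10, 100, 0.9, 1.0))   # cold item at a central node -> False
```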

10.
To cope with the limited buffer resources of nodes in intermittently connected wireless networks, this paper proposes a buffer management mechanism suited to such networks. Based on the network state information collected while moving, each node estimates, in a distributed manner, the direct and indirect connection states between a given node and other nodes, the node service rate, and the node connectivity strength, dynamically perceiving the differences in node service capabilities; at the same time, it predicts the probability that the current node will successfully deliver a message so as to perceive the message's utility value, and performs buffer management operations accordingly. Results show that, compared with other buffer management mechanisms, the proposed mechanism not only effectively reduces delivery overhead but also greatly improves the message delivery ratio.

11.
This paper presents a caching algorithm that offers better reconstructed data quality to requesters than a probabilistic caching scheme while maintaining comparable network performance. It decides whether an incoming data packet must be cached based on a dynamic caching probability, which is adjusted according to the priorities of the content carried by the data packet, the uncertainty of content popularities, and the records of cache events in the router. The adaptation of the caching probability depends on the priorities of content, a multiplication factor adaptation, and an addition factor adaptation. The multiplication factor adaptation is computed from an instantaneous cache-hit ratio, whereas the addition factor adaptation relies on the multiplication factor, the popularities of requested contents, a cache-hit ratio, and a cache-miss ratio. We evaluate the performance of the caching algorithm by comparing it with previous caching schemes in network simulation. The simulation results indicate that our proposed caching algorithm surpasses previous schemes in terms of data quality and is comparable in terms of network performance.
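
A minimal sketch of a dynamic caching probability updated in this spirit: raised multiplicatively when hits dominate misses and nudged additively using content priority and the hit/miss balance. The update constants and the exact blending rule are illustrative assumptions, not the paper's formulas.

```python
def update_caching_probability(p, hit_ratio, miss_ratio, priority,
                               mult_step=1.1, add_step=0.05):
    # Multiplicative part: scale up when hits dominate, down otherwise.
    multiplicative = p * (mult_step if hit_ratio > miss_ratio else 1.0 / mult_step)
    # Additive part: weighted by the content priority and the hit/miss balance.
    additive = add_step * priority * (hit_ratio - miss_ratio)
    return min(1.0, max(0.0, multiplicative + additive))

p = 0.5
for hit, miss, prio in [(0.6, 0.4, 1.0), (0.3, 0.7, 0.5), (0.7, 0.3, 0.8)]:
    p = update_caching_probability(p, hit, miss, prio)
    print(round(p, 3))
```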

12.
The explosive growth of mobile data traffic has pushed cellular operators to seek low-cost alternatives for off-loading cellular traffic. In this paper, we consider a content delivery network in which a vehicular communication network composed of roadside units (RSUs) is integrated into a cellular network to serve as an off-loading platform. Each RSU, subject to its storage capacity, caches a subset of the contents of the central content server. Allocating a suitable subset of contents to each RSU cache so as to maximize the hit ratio of vehicle requests is a problem of paramount importance and is the target of this study. First, we propose a centralized solution in which we model the cache content placement problem as a submodular maximization problem and show that it is NP-hard. Second, we propose a distributed cooperative caching scheme in which RSUs in an area periodically share information about their contents locally and update their caches accordingly. To this end, we model the distributed caching problem as a strategic resource allocation game that achieves at least 50% of the optimal solution. Finally, we evaluate our scheme using an urban mobility simulator under realistic conditions. On average, the results show an improvement of 8% in the hit ratio of the proposed method compared with other well-known cache content placement approaches.
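
A minimal sketch of a standard greedy heuristic for this kind of submodular hit-ratio maximization: repeatedly give the next free cache slot to the (RSU, content) pair with the largest marginal gain in expected hits. The request rates, coverage sets, and capacities below are illustrative assumptions, not data from the paper.

```python
def greedy_placement(rsus, contents, capacity, demand, coverage):
    """demand[v][c]: request rate of vehicle v for content c.
    coverage[r]: vehicles reachable from RSU r."""
    placement = {r: set() for r in rsus}
    served = {v: set() for v in demand}   # contents already served to each vehicle

    def gain(r, c):
        return sum(demand[v].get(c, 0.0) for v in coverage[r] if c not in served[v])

    while True:
        candidates = [(gain(r, c), r, c) for r in rsus for c in contents
                      if c not in placement[r] and len(placement[r]) < capacity]
        if not candidates:
            break
        g, r, c = max(candidates)
        if g <= 0:
            break
        placement[r].add(c)
        for v in coverage[r]:
            served[v].add(c)
    return placement

demand = {"v1": {"a": 3, "b": 1}, "v2": {"b": 2}}
coverage = {"rsu1": ["v1"], "rsu2": ["v1", "v2"]}
print(greedy_placement(["rsu1", "rsu2"], ["a", "b"], 1, demand, coverage))
```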

13.
In order to optimize replica placement in information-centric networking, an edge-first-based cooperative caching strategy (ECCS) was proposed. Under this strategy, the cache decision is made during the Interest forwarding stage, and the decision result and statistical information are forwarded to upstream routers hop by hop. Using this information, upstream nodes can update their cache information tables immediately to achieve cooperative caching. The experimental results indicate that ECCS achieves a salient performance gain in terms of server load reduction ratio, average hop reduction ratio, and average cache hit ratio compared with current strategies.

14.
In recent years, named data networking (NDN) has been widely accepted as a promising future paradigm and has attracted much attention; its routing model comprises Interest forwarding and content delivery. Interest forwarding, however, is far from being the bottleneck of routing optimization; instead, studying content delivery can greatly improve routing performance. Although many proposals on content delivery have been investigated, they do not consider packet-level caching and deep traffic aggregation, which works against the performance optimization of content delivery. In this paper, we propose a packet-level-based traffic aggregation (PLTA) scheme to optimize NDN content delivery. First, the packet format is devised, and the Data Plane Development Kit (DPDK) is used to ensure the same size for each packet. Then, the whole delivery scheme with traffic aggregation is presented. The simulation is driven by a real YouTube dataset over the Deltacom, NSFNET, and CERNET topologies, and the experimental results demonstrate that the proposed PLTA achieves better delivery performance than three baselines in terms of cache hit ratio, delivery delay, network load, and energy efficiency.

15.
Edge caching is an effective feature of the next 5G network that guarantees the availability of service content and a reduced response time for the user. However, the placement of cache content remains an open issue in fully exploiting edge caching. In this paper, we address the proactive caching problem in the Heterogeneous Cloud Radio Access Network (H-CRAN) from a game-theoretic point of view. The problem is formulated as a bargaining game in which the remote radio heads (RRHs) dynamically negotiate and decide which content to cache in which RRH under energy-saving and cache-capacity constraints. The Pareto-optimal equilibrium is proved for the cooperative game via the iterative Nash bargaining algorithm. We compare cooperative and noncooperative proactive caching games and demonstrate how the selfishness of different players can affect overall system performance. We also show that our cooperative proactive caching game reduces energy consumption by 40% compared with the noncooperative game and by 68% compared with a no-game strategy. Moreover, the number of satisfied requests at the RRHs under the proposed cooperative proactive caching scheme is significantly increased.

16.
蔡艳  吴凡  朱洪波 《通信学报》2021,(3):183-189
To meet the low-latency, high-reliability requirements of 5G systems, a transmission-delay-based caching strategy is proposed for a single-cache device-to-device (D2D) cooperative edge caching system. Using stochastic geometry, the dynamic distributions of requesting users and idle users are modeled as mutually independent homogeneous Poisson point processes, and the relationship between the users' average transmission delay and the caching probability distribution is derived by jointly considering content popularity, user locations, device transmission power, and interference. An optimization problem is then formulated with the average transmission delay as the objective, and a low-complexity iterative algorithm is proposed to obtain a caching strategy with sub-optimal average transmission delay. Simulation results show that this caching strategy outperforms several common caching strategies in terms of transmission delay.

17.
In this paper, we investigate an incentive edge caching mechanism for an Internet of Vehicles (IoV) system based on the paradigm of software-defined networking (SDN). We start by proposing a distributed SDN-based IoV architecture. Then, based on this architecture, we focus on the economic side of caching by considering a competitive cache-enabler market composed of one content provider (CP) and multiple mobile network operators (MNOs). Each MNO manages a set of cache-enabled small base stations (SBSs). The CP incites the MNOs to store its popular contents in the cache-enabled SBSs with the highest access probability to enhance the satisfaction of its users. By leasing their cache-enabled SBSs, the MNOs aim to make more monetary profit. We formulate the interaction between the CP and the MNOs as a Stackelberg game, where the CP acts first as the leader by announcing the quantity of popular content that it wishes to cache and fixing the caching popularity threshold, a minimum access probability below which a content cannot be cached. The MNOs then act as followers, responding with the content quantity they accept to cache and the corresponding caching price. A noncooperative subgame is formulated to model the competition between the followers over the CP's limited content quantity. We analyze the leader's and the followers' optimization problems and prove the Stackelberg equilibrium (SE). Simulation results show that our game-based incentive caching model achieves optimal utilities and outperforms other incentive caching mechanisms with monopoly cache-enablers, while improving user satisfaction by 30% and reducing the caching cost.

18.
To overcome the problems of on-path caching schemes in content-centric networking, a coordinated caching scheme based on the node with the maximum betweenness value and the edge node is designed. According to the topology characteristics, popular content is identified at the node with the maximum betweenness value and tracked at the edge node, and the on-path caching location is determined by the popularity and the cache size. Simulation results show that, compared with classical schemes, this scheme improves the cache hit ratio and decreases the average hop ratio, thus enhancing the efficiency of the cache system.
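
A minimal sketch of picking the two caching locations described above on a delivery path: the on-path router with the maximum betweenness value and the router adjacent to the edge. The topology is illustrative, and the sketch assumes the networkx package is available.

```python
import networkx as nx

G = nx.Graph()
G.add_edges_from([("server", "r1"), ("r1", "r2"), ("r2", "r3"),
                  ("r2", "r4"), ("r3", "edge1"), ("r4", "edge2")])

betweenness = nx.betweenness_centrality(G)
path = nx.shortest_path(G, "server", "edge1")          # content delivery path
core_cache = max(path[1:-1], key=betweenness.get)      # max-betweenness router
edge_cache = path[-2]                                   # router next to the requester

print(core_cache, edge_cache)
```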

19.
This paper investigates the content placement problem of maximizing the cache hit ratio in device-to-device (D2D) communications overlaying cellular networks. We consider offloading content through users' own caches, D2D communications, and multicast, and we analyze the relationship between these offloading methods and the cache hit ratio. Based on this relationship, we formulate content placement as a cache hit ratio maximization problem and propose a heuristic algorithm to solve it. Numerical results demonstrate that the proposed scheme outperforms existing schemes in terms of the cache hit ratio.

20.
黄丹  宋荣方 《电信科学》2018,34(11):59-66
Cache replacement is one of the important research problems in content-centric networking: since cache space is limited, replacing cached content sensibly becomes a key factor in overall network performance. This paper therefore designs a cache replacement scheme based on content value. The scheme jointly considers a content item's dynamic popularity, caching cost, and the time it was most recently requested to build a more realistic content value function, and, based on this function, designs an effective content storage and replacement scheme. Specifically, when cache space is insufficient, existing cached contents are evicted in ascending order of value. Simulation results show that, compared with the traditional replacement algorithms LRU, LFU, and FIFO, the proposed scheme effectively improves the content cache hit ratio of network nodes and reduces the average number of hops users need to retrieve content.
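
A minimal sketch of a content value function in this spirit: value grows with dynamic popularity and the cost of re-fetching the content, and decays with the time since the last request; the lowest-value item is evicted first. The weights and the decay form are illustrative assumptions, not the paper's exact function.

```python
import time

def content_value(request_count, fetch_cost, last_request_time,
                  now=None, w_pop=1.0, w_cost=0.5, decay=0.01):
    now = time.time() if now is None else now
    recency_penalty = 1.0 + decay * (now - last_request_time)  # older -> smaller value
    return (w_pop * request_count + w_cost * fetch_cost) / recency_penalty

def evict_lowest_value(cache, now=None):
    """cache: name -> (request_count, fetch_cost, last_request_time)."""
    victim = min(cache, key=lambda n: content_value(*cache[n], now=now))
    cache.pop(victim)
    return victim

cache = {"a": (50, 2.0, time.time() - 30), "b": (5, 1.0, time.time() - 600)}
print(evict_lowest_value(cache))   # "b": unpopular and long unrequested
```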
