Similar Documents
20 similar documents were found.
1.
蔡艳  吴凡  朱洪波 《通信学报》2021,(3):183-189
To meet the low-latency, high-reliability requirements of 5G systems, a transmission-delay-based caching strategy is proposed for single-cache device-to-device (D2D) cooperative edge caching systems. Using stochastic geometry, the dynamic distributions of requesting users and idle users are modeled as mutually independent homogeneous Poisson point processes, and the relationship between the users' average transmission delay and the caching probability distribution is derived, taking content popularity, user location, device transmission power, and interference into account. An optimization problem is then formulated with the average transmission delay as the objective function, and a low-complexity iterative algorithm is proposed to obtain a caching strategy with near-optimal average delay. Simulation results show that the proposed strategy outperforms several common caching strategies in terms of transmission delay.
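The iterative algorithm itself depends on the stochastic-geometry delay expression derived in the paper; the Python sketch below only illustrates the general idea of iteratively shaping a per-content caching probability vector under a cache budget. The two-level delay model, the step size, and the allocation rule are illustrative assumptions, not the authors' formulas.

```python
def iterative_cache_probabilities(popularity, cache_budget, step=0.01,
                                  d2d_delay=1.0, bs_delay=5.0):
    """Toy iterative allocation of caching probability under a total budget.
    Assumption: a request for content f is served over D2D with probability p[f]
    (short delay d2d_delay), otherwise by the base station (longer delay bs_delay)."""
    n = len(popularity)
    p = [0.0] * n
    remaining = cache_budget
    while remaining > 1e-9:
        # marginal reduction in average delay from raising p[f] by one step
        gains = [popularity[f] * (bs_delay - d2d_delay) if p[f] < 1.0 else 0.0
                 for f in range(n)]
        best = max(range(n), key=lambda f: gains[f])
        if gains[best] <= 0.0:
            break
        delta = min(step, remaining, 1.0 - p[best])
        p[best] += delta
        remaining -= delta
    return p

# Zipf-like request probabilities (assumed); budget of one content per idle device
print(iterative_cache_probabilities([0.5, 0.25, 0.15, 0.1], cache_budget=1.0))
```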

2.
To make effective use of storage space and cache response content efficiently in Named Data Networking (NDN), this paper adopts differentiated caching and proposes a cooperative caching algorithm based on the sequence correlation of content requests. When content is requested, parallel predictive requests for subsequent related data units are issued in advance, increasing the probability that requests are answered nearby. For cache decisions, a two-dimensional differentiated strategy combining storage location and cache residence time is proposed: following the trend of content activity, the storage location is advanced hop by hop in the spatial dimension while the caching time is adjusted dynamically in the temporal dimension, so that genuinely popular content is progressively pushed toward edge storage. The algorithm reduces request latency and cache redundancy and improves the cache hit ratio; simulation results verify its effectiveness.
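As a minimal illustration of the parallel predictive requests described above, the sketch below expands a single chunk request into a small batch that also covers the next few chunks of the same content; the naming scheme and window size are assumptions, not part of the paper.

```python
def build_request_batch(content_name, chunk_index, prefetch_window=3, total_chunks=None):
    """Return the Interest names to send: the requested chunk plus predictive
    requests for the next few chunks of the same content (sequence correlation)."""
    last = chunk_index + prefetch_window
    if total_chunks is not None:
        last = min(last, total_chunks - 1)
    return [f"{content_name}/chunk/{i}" for i in range(chunk_index, last + 1)]

# Example: requesting chunk 4 of /video/a also pre-requests chunks 5-7
print(build_request_batch("/video/a", 4))
```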

3.
吴海博  李俊  智江 《通信学报》2016,37(5):62-72
A heuristic, probability-based content caching method for information-centric networking (PCP) is proposed. The main idea is that request and data messages collect the necessary statistics while in transit; when a data message returns, each caching node along the path decides with a certain probability whether to cache the content locally. The caching probability is designed from both content popularity and cache placement benefit: the more popular the content and the larger its placement benefit, the more likely it is to be cached. Experimental results show that PCP significantly outperforms existing methods in cache service ratio, cache hit ratio, and average access latency, while incurring low overhead.
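A minimal sketch of the PCP-style decision described above: each node on the return path caches the content with a probability that grows with both popularity and placement benefit. The normalization and the linear combination rule are illustrative assumptions; the paper defines its own probability function.

```python
import random

def cache_probability(popularity, placement_gain, max_popularity, max_gain, weight=0.5):
    """Illustrative PCP-style probability: grows with both normalized popularity
    and normalized placement gain (the combination rule is an assumption)."""
    pop = popularity / max_popularity if max_popularity else 0.0
    gain = placement_gain / max_gain if max_gain else 0.0
    return weight * pop + (1.0 - weight) * gain

def maybe_cache(node_store, name, data, popularity, placement_gain,
                max_popularity, max_gain):
    """On the data packet's way back, each node caches with the computed probability."""
    p = cache_probability(popularity, placement_gain, max_popularity, max_gain)
    if random.random() < p:
        node_store[name] = data
    return p
```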

4.
Edge caching is an effective feature of the next 5G network to guarantee the availability of service content and a reduced response time for the user. However, the placement of cached content remains an issue in fully exploiting edge caching. In this paper, we address the proactive caching problem in the Heterogeneous Cloud Radio Access Network (H-CRAN) from a game-theoretic point of view. The problem is formulated as a bargaining game in which the remote radio heads (RRHs) dynamically negotiate and decide which content to cache in which RRH under energy-saving and cache-capacity constraints. The Pareto-optimal equilibrium of the cooperative game is proved via the iterative Nash bargaining algorithm. We compare cooperative and noncooperative proactive caching games and demonstrate how the selfishness of different players affects overall system performance. We also show that our cooperative proactive caching game reduces energy consumption by 40% compared with the noncooperative game and by 68% compared with a no-game strategy. Moreover, the number of satisfied requests at the RRHs with the proposed cooperative proactive caching scheme is significantly increased.

5.
A Privacy-Preserving Cooperative Caching Strategy in Content-Centric Networking
To address the privacy leakage caused by ubiquitous in-network caching in content-centric networking, this paper proposes a privacy-preserving cooperative caching strategy that also preserves content delivery performance. A privacy metric based on information entropy is defined, with the goal of increasing the attacker's uncertainty; the soundness of the caching strategy is first proved, and then a spatial anonymity region is constructed to enlarge the user anonymity set and increase the uncertainty about which user a cached content belongs to. For cache decisions, hot-content caching along the vertical request path and locality-aware hash cooperation within the horizontal anonymity region are proposed to reduce cache redundancy and privacy leakage. Simulation results show that the strategy reduces content request latency and improves the cache hit ratio, enhancing user privacy while improving content delivery efficiency.
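The entropy-based privacy metric can be illustrated in a few lines of Python: the attacker's uncertainty about which user in the anonymity set issued the request is the Shannon entropy of the attacker's belief distribution (how that belief is formed is outside this sketch and is an assumption).

```python
import math

def privacy_entropy(belief):
    """Shannon entropy (bits) of the attacker's belief over which user in the
    anonymity set originated a cached content; larger entropy = more uncertainty."""
    return -sum(p * math.log2(p) for p in belief if p > 0.0)

# With 4 equally likely candidate requesters the attacker's uncertainty is log2(4) = 2 bits;
# a skewed belief gives less.
print(privacy_entropy([0.25, 0.25, 0.25, 0.25]))   # 2.0
print(privacy_entropy([0.7, 0.1, 0.1, 0.1]))       # about 1.36
```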

6.
陈龙  汤红波  罗兴国  柏溢  张震 《通信学报》2016,37(5):130-142
To obtain the popularity of massive numbers of content chunks and use storage resources efficiently in the in-network caching system of information-centric networking (ICN), a cache benefit optimization model driven by chunk popularity is built with the objective of maximizing the total saved content access cost, and a benefit-aware caching mechanism is proposed. The mechanism exploits the filtering effect of caches on request streams, so that maximizing the benefit at a single node implicitly yields inter-node cooperation and cache diversity. A Bloom-filter-based sliding window strategy detects request inter-arrival times while accounting for the cost of fetching content from the origin server, capturing the chunks with high caching benefit. Analysis shows that this approach greatly reduces the storage overhead of tracking content popularity; simulation results show that it senses popularity-based caching benefit fairly accurately and offers better bandwidth savings and cache hit ratio when content popularity varies dynamically.
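A rough sketch of the Bloom-filter sliding window idea, assuming a window of fixed-length time slots with one small Bloom filter per slot; the filter sizes, the rotation policy, and the way the fetch cost is folded into the benefit are illustrative assumptions.

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter: k hash functions over an m-bit array (no deletions)."""
    def __init__(self, m=1024, k=3):
        self.m, self.k, self.bits = m, k, 0
    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m
    def add(self, item):
        for pos in self._positions(item):
            self.bits |= (1 << pos)
    def __contains__(self, item):
        return all(self.bits & (1 << pos) for pos in self._positions(item))

class SlidingWindow:
    """Sliding window of Bloom filters: membership in an older sub-window means the
    chunk was requested recently, i.e. its inter-arrival time is short."""
    def __init__(self, subwindows=4):
        self.filters = [BloomFilter() for _ in range(subwindows)]
    def rotate(self):                     # called once per time slot
        self.filters = [BloomFilter()] + self.filters[:-1]
    def record(self, chunk):
        self.filters[0].add(chunk)
    def seen_recently(self, chunk):
        return any(chunk in f for f in self.filters[1:])

def cache_benefit(window, chunk, fetch_cost):
    """Illustrative benefit: nonzero only for chunks re-requested within the window,
    weighted by the cost of fetching them from the origin (combination is assumed)."""
    return fetch_cost if window.seen_recently(chunk) else 0.0
```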

7.
To overcome the problems of on-path caching schemes in content-centric networking, a coordinated caching scheme based on the node with the maximum betweenness value and the edge node was designed. According to the topology characteristics, popular content is identified at the node with the maximum betweenness value and tracked at the edge node, and the on-path caching location is determined by popularity and cache size. Simulation results show that, compared with classical schemes, this scheme improves the cache hit ratio and decreases the average hop ratio, thus enhancing the efficiency of the caching system.

8.
The explosive growth of mobile data traffic has driven cellular operators to seek low-cost alternatives for off-loading cellular traffic. In this paper, we consider a content delivery network in which a vehicular communication network composed of roadside units (RSUs) is integrated into a cellular network to serve as an off-loading platform. Each RSU, subject to its storage capacity, caches a subset of the contents of the central content server. Allocating a suitable subset of contents to each RSU cache so as to maximize the hit ratio of vehicle requests is the problem of paramount value targeted in this study. First, we propose a centralized solution in which we model the cache content placement problem as a submodular maximization problem and show that it is NP-hard. Second, we propose a distributed cooperative caching scheme in which RSUs in an area periodically share information about their contents locally and update their caches accordingly. To this end, we model the distributed caching problem as a strategic resource allocation game that achieves at least 50% of the optimal solution. Finally, we evaluate our scheme using an urban mobility simulator under realistic conditions. On average, the results show an improvement of 8% in hit ratio for the proposed method compared with other well-known cache content placement approaches.
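A hedged sketch of the centralized greedy placement implied by the submodular formulation: repeatedly add the (RSU, content) pair with the largest marginal gain until every cache is full. The demand model below counts only local hits, which is a simplification of the paper's objective, and all inputs are assumed.

```python
def greedy_placement(rsus, contents, demand, capacity):
    """Greedy cache placement sketch: repeatedly add the (RSU, content) pair with the
    largest marginal gain in served requests until every RSU cache is full.
    demand[(r, c)] = expected requests for content c in RSU r's coverage (assumed input)."""
    placement = {r: set() for r in rsus}
    def total_hits(pl):
        return sum(demand.get((r, c), 0) for r in rsus for c in pl[r])
    improved = True
    while improved:
        improved = False
        base = total_hits(placement)
        best, best_gain = None, 0
        for r in rsus:
            if len(placement[r]) >= capacity[r]:
                continue
            for c in contents:
                if c in placement[r]:
                    continue
                placement[r].add(c)
                gain = total_hits(placement) - base
                placement[r].remove(c)
                if gain > best_gain:
                    best, best_gain = (r, c), gain
        if best is not None:
            placement[best[0]].add(best[1])
            improved = True
    return placement

rsus = ["rsu1", "rsu2"]
contents = ["a", "b", "c"]
demand = {("rsu1", "a"): 30, ("rsu1", "b"): 10, ("rsu2", "a"): 5, ("rsu2", "c"): 20}
print(greedy_placement(rsus, contents, demand, {"rsu1": 1, "rsu2": 1}))
```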

9.
Aiming at reducing the backhaul link load through edge caching in fog radio access networks (F-RAN), a multi-tier cooperative caching scheme was proposed. In particular, by considering the network topology, content popularity prediction, and link capacity, the optimization problem is decomposed into knapsack subproblems across the tiers, and effective greedy algorithms are proposed to solve the corresponding subproblems. Simulation results show that the proposed multi-tier cooperative caching scheme can effectively reduce backhaul traffic and achieve a relatively high cache hit rate.
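The per-tier knapsack subproblems can be approximated with the usual density-greedy heuristic; the sketch below assumes value = popularity x size (a proxy for backhaul traffic saved at this tier) and would be run once per tier on the requests left unserved by lower tiers.

```python
def greedy_knapsack_cache(files, capacity):
    """Per-tier knapsack sketch: pick files by descending value density until the cache
    is full. value = popularity * size is an assumed proxy for backhaul traffic saved,
    so the density value/size reduces to popularity here; weight = file size."""
    ranked = sorted(files, key=lambda f: f["popularity"], reverse=True)
    cached, used = [], 0
    for f in ranked:
        if used + f["size"] <= capacity:
            cached.append(f["name"])
            used += f["size"]
    return cached

files = [{"name": "a", "popularity": 0.5, "size": 4},
         {"name": "b", "popularity": 0.3, "size": 1},
         {"name": "c", "popularity": 0.2, "size": 2}]
print(greedy_knapsack_cache(files, capacity=5))   # ['a', 'b'] under these assumptions
```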

10.
张涛  李强  张继良  张蔡霞 《电子学报》2017,45(11):2649-2655
To ease the tension between massive mobile traffic and the capacity-limited backhaul links of the radio access network, this paper proposes a cooperative content caching architecture for software-defined radio access networks (SD-RAN). Under the control and management of the macro base station (MBS), small-cell base stations (SBS) store some of the most popular contents in their storage units. To cope with the limited SBS storage space, a cooperative content caching algorithm under the SD-RAN architecture is further proposed, in which each SBS cache is split into two parts: (1) one stores the globally most popular common contents to guarantee the local hit ratio of each small cell; (2) the other stores relatively popular but differentiated contents to promote cooperation among the SBSs within an MBS. On this basis, a closed-form expression for the split parameter minimizing the average content acquisition cost is derived analytically. Simulation results show that the algorithm significantly reduces the average content acquisition cost of SD-RAN under various system parameters.
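A small sketch of the two-part SBS cache described above: a common partition holding the globally top-ranked contents plus a differentiated partition in which each SBS takes a distinct slice. The interleaving rule and the example split parameter are assumptions; the paper derives the optimal split in closed form.

```python
def split_cache_contents(ranked_contents, cache_size, num_sbs, sbs_index, common_fraction):
    """Two-part SBS cache sketch: the first common_fraction of the cache holds the
    globally most popular contents (same at every SBS); the rest holds a differentiated
    slice so that SBSs under one MBS jointly cover more contents."""
    common_size = int(round(common_fraction * cache_size))
    diverse_size = cache_size - common_size
    common = ranked_contents[:common_size]
    # each SBS takes a distinct, interleaved slice of the next most popular contents
    tail = ranked_contents[common_size:]
    diverse = tail[sbs_index::num_sbs][:diverse_size]
    return common + diverse

ranked = [f"c{i}" for i in range(1, 21)]          # contents ranked by popularity
for s in range(3):
    print(s, split_cache_contents(ranked, cache_size=5, num_sbs=3, sbs_index=s,
                                  common_fraction=0.4))
```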

11.
To address the vast multimedia traffic volume and the user quality-of-experience requirements of the next-generation mobile communication system (5G), it is imperative to develop an efficient content caching strategy at the mobile network edge, which is deemed a key technique for 5G. Recent advances in edge/cloud computing and machine learning facilitate efficient content caching for 5G, where mobile edge computing can be exploited to reduce service latency by equipping the edge network with computation and storage capacity. In this paper, we propose a proactive caching mechanism named learning-based cooperative caching (LECC), built on a mobile edge computing architecture, to reduce transmission cost while improving user quality of experience for future mobile networks. In LECC, we exploit a transfer learning-based approach for estimating content popularity and then formulate the proactive caching optimization model. As the optimization problem is NP-hard, we resort to a greedy algorithm for solving the cache content placement problem. Performance evaluation reveals that LECC can markedly improve the content cache hit rate and decrease content delivery latency and transmission cost in comparison with known existing caching strategies.
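The transfer-learning popularity estimator is not specified in the abstract; as a hedged illustration of the general idea, the sketch below shrinks sparse local request frequencies toward popularity learned in a richer source domain, with a blending weight that grows with the amount of local data. The blending rule and the confidence_requests constant are assumptions, not the LECC estimator.

```python
def estimate_popularity(local_counts, source_popularity, total_local_requests,
                        confidence_requests=1000):
    """Transfer-style popularity estimate (assumed form): blend local request
    frequencies with source-domain popularity, trusting local data more as it grows."""
    w = total_local_requests / (total_local_requests + confidence_requests)
    est = {}
    for c, src_p in source_popularity.items():
        local_p = local_counts.get(c, 0) / total_local_requests if total_local_requests else 0.0
        est[c] = w * local_p + (1.0 - w) * src_p
    return est

source = {"a": 0.5, "b": 0.3, "c": 0.2}           # popularity from a related domain
local = {"a": 10, "c": 40}                        # sparse local observations
print(estimate_popularity(local, source, total_local_requests=50))
```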

12.
Existing cooperative caching algorithms for mobile ad hoc networks face serious challenges due to message overhead and scalability issues. To solve these issues, we propose an adaptive virtual-backbone-based cooperative caching scheme that uses a connected dominating set (CDS) to find the desired location of cached data. Message overhead in cooperative caching is mainly due to the cache lookup process. The idea in this scheme is to reduce the number of nodes involved in cache lookup by constructing a virtual backbone that adapts to the dynamic topology of mobile ad hoc networks. The proposed algorithm is decentralized, and the nodes in the CDS perform data dissemination and discovery. Simulation results show that the message overhead created by the proposed cooperative caching technique is much lower than that of other approaches. Moreover, thanks to the CDS-based cache discovery applied in this work, the proposed cooperative caching has the potential to increase the cache hit ratio and reduce average delay.
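The virtual backbone can be built with a standard greedy connected-dominating-set construction (Guha-Khuller style); the sketch below is one such construction, not the paper's adaptive maintenance algorithm, and it assumes a connected topology.

```python
def greedy_cds(adjacency):
    """Greedy connected dominating set sketch: the black nodes form the virtual
    backbone that handles cache lookup; assumes a connected graph."""
    white = set(adjacency)                       # not yet covered
    black, gray = set(), set()                   # backbone / covered neighbours
    seed = max(adjacency, key=lambda n: len(adjacency[n]))   # highest-degree seed
    black.add(seed); white.discard(seed)
    for n in adjacency[seed]:
        if n in white:
            gray.add(n); white.discard(n)
    while white:
        # grow from a gray node (keeps the set connected), choosing the one
        # that newly covers the most white nodes
        best = max(gray, key=lambda n: len(set(adjacency[n]) & white))
        gray.discard(best); black.add(best)
        for n in adjacency[best]:
            if n in white:
                gray.add(n); white.discard(n)
    return black

topology = {"a": ["b"], "b": ["a", "c", "d"], "c": ["b", "e"], "d": ["b"], "e": ["c"]}
print(greedy_cds(topology))   # e.g. {'b', 'c'} for this toy topology
```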

13.
Quality of user experience deteriorates dramatically under explosively growing data traffic. To improve the poor performance of cell-edge users and users in heavily loaded cells, caused by network densification and load imbalance respectively, a QoE-aware cooperative video caching and transmission mechanism in the cloud radio access network was proposed. A cooperative-gain-aware virtual passive optical network is established to provide cooperative caching and transmission for video streaming, adopting a collaborative approach across the optical and wireless domains. Furthermore, video user experience, bandwidth provisioning, and the caching strategy are jointly optimized to improve QoE, using dynamic caching in the optical domain and buffer-level-aware bandwidth configuration in the wireless domain. The results show that the proposed mechanism enhances user experience and effectively improves the cache hit rate.

14.
Considering both energy optimization and performance improvement in content-centric networking (CCN), this paper proposes an energy-optimized implicit cooperative caching mechanism. For cache decisions, the energy saved by caching is used as the decision criterion, with priority given to nodes far from the user, and data packets carry the hop count to the nearest upstream cache to enable implicit cooperation, relieving contention for cache space at nodes near the user and increasing the diversity of neighboring caches. For cache replacement, the cached content with the smallest caching energy saving is evicted, achieving the best energy optimization. Simulation results show that the mechanism attains a good cache hit ratio and average routing hop count while effectively reducing network energy consumption.

15.
Internet service providers (ISPs) have taken measures to reduce intolerable inter-ISP peer-to-peer (P2P) traffic costs, and the user experience of various P2P applications has suffered as a result. The recently emerging offline downloading service seeks to improve user experience by using dedicated servers to cache requested files and provide high-speed uploading. However, with the rapid increase in user population, the server-side bandwidth of offline downloading systems is expected to become insufficient in the near future. We propose a novel complementary caching scheme with the goal of mitigating inter-ISP traffic, alleviating the load on the servers of Internet applications, and enhancing user experience. Both the architecture and the caching algorithm are presented in this paper. On the one hand, with full knowledge of the P2P file-sharing system and the offline downloading service, the complementary caching infrastructure is designed to be easily deployed and to work together with existing platforms; the cooperation mechanisms among the major components are also included. On the other hand, with an in-depth understanding of the traffic characteristics relevant to caching, we develop a complementary caching algorithm based on request density, file redundancy, and file size. Since this information can be captured in real time in our design, the proposed policy can be implemented to guide the storage and replacement of cached units. Based on real-world traces spanning 3 months, we demonstrate that the complementary caching scheme achieves a "three-win" objective: for P2P downloading, over 50% of traffic is redirected to the cache; for offline downloading, the average server dependence of tasks drops from 0.71 to 0.32; and for user experience, the average P2P transfer rate increases by more than 50 KB/s.

16.
Under the converged software-defined networking (SDN) and content-centric networking (CCN) architecture, a centrally controlled cache decision optimization scheme is proposed to take full advantage of the control layer's global view of the network topology and cache resources and to optimize cache usage across the whole network. In this scheme, particle swarm optimization (PSO) is applied, and centralized cache decisions are made for cache resources and contents according to node edge degree, node importance, and content popularity, so that contents are cached at appropriate nodes. Simulation results, which evaluate the impact of cache size on caching performance, show that the PSO-based cache decision method achieves a better cache hit ratio and path stretch than the LCE and PROB strategies and clearly reduces the number of cache replacements at caching nodes, achieving network-wide cache optimization.
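A minimal binary-PSO sketch of such a centralized cache decision: each particle encodes a tentative (node, content) placement, and the controller-side fitness function is where node edge degree, node importance, content popularity, and capacity limits would be scored. The encoding, velocity clamping, and the toy fitness below are assumptions, not the paper's formulation.

```python
import math, random

def binary_pso_cache(num_nodes, num_contents, fitness, particles=20, iters=100,
                     w=0.7, c1=1.5, c2=1.5, vmax=4.0):
    """Binary-PSO sketch: a particle encodes, for every (node, content) pair, whether
    that content is cached at that node; fitness(placement) is supplied by the controller."""
    dim = num_nodes * num_contents
    sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
    def decode(x):
        bits = [1 if random.random() < sigmoid(v) else 0 for v in x]
        return [bits[n * num_contents:(n + 1) * num_contents] for n in range(num_nodes)]
    X = [[random.uniform(-vmax, vmax) for _ in range(dim)] for _ in range(particles)]
    V = [[0.0] * dim for _ in range(particles)]
    pbest = [x[:] for x in X]
    pbest_val = [fitness(decode(x)) for x in X]
    g = max(range(particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                v = (w * V[i][d] + c1 * r1 * (pbest[i][d] - X[i][d])
                     + c2 * r2 * (gbest[d] - X[i][d]))
                V[i][d] = max(-vmax, min(vmax, v))       # clamp velocity
                X[i][d] += V[i][d]
            val = fitness(decode(X[i]))
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = X[i][:], val
                if val > gbest_val:
                    gbest, gbest_val = X[i][:], val
    return decode(gbest), gbest_val

# toy fitness (assumption): reward popular contents on important nodes, penalize overfull caches
popularity = [0.6, 0.3, 0.1]
importance = [1.0, 0.5]
capacity = 2
def toy_fitness(placement):
    score = sum(importance[n] * popularity[c]
                for n, row in enumerate(placement) for c, bit in enumerate(row) if bit)
    over = sum(max(0, sum(row) - capacity) for row in placement)
    return score - 10.0 * over
print(binary_pso_cache(2, 3, toy_fitness, iters=30))
```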

17.
GroCoca: group-based peer-to-peer cooperative caching in mobile environment
In a mobile cooperative caching environment, we observe the need for cooperating peers to cache useful data items together, so as to improve the cache hit rate from peers. This can be achieved by capturing the data requirements of individual peers in conjunction with their mobility pattern, which we realize via a GROup-based COoperative CAching scheme (GroCoca). In GroCoca, we define a tightly-coupled group (TCG) as a collection of peers that possess a similar mobility pattern and display similar data affinity. A family of algorithms is proposed to discover and maintain all TCGs dynamically. Furthermore, two cooperative cache management protocols, namely cooperative cache admission control and replacement, are designed to control data replicas and improve data accessibility in TCGs. A cache signature scheme is also adopted in GroCoca to provide information for mobile clients to determine whether their TCG members are likely to be caching their desired data items and to perform cooperative cache replacement. Experimental results show that GroCoca outperforms the conventional caching scheme and the standard COoperative CAching scheme (COCA) in terms of access latency and global cache hit ratio. However, GroCoca generally incurs higher power consumption.

18.
刘银龙  汪敏  周旭 《通信学报》2015,36(3):187-194
To reduce the global overhead of a P2P caching system, a cooperative caching strategy that minimizes the total cost is proposed. The strategy jointly considers transmission cost and storage cost in the P2P caching system, measuring the caching gain of a file by its inter-ISP link cost, popularity, file size, and storage cost. When replacement is needed, the content with the smallest caching gain is evicted first. Experimental results show that the strategy effectively reduces the total system cost.
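A minimal sketch of the gain-based replacement rule described above, assuming the caching gain of a file is the expected inter-ISP transfer cost it avoids minus its storage cost; the exact cost model in the paper may differ.

```python
def cache_gain(popularity, file_size, inter_isp_cost_per_byte, storage_cost_per_byte):
    """Illustrative gain of keeping a file cached: expected inter-ISP transfer cost
    avoided minus the cost of storing it (assumed formula)."""
    saved = popularity * file_size * inter_isp_cost_per_byte
    return saved - file_size * storage_cost_per_byte

def insert_with_replacement(cache, capacity, name, meta):
    """Evict the lowest-gain files until the new file fits, but only if its own
    gain beats every file it would evict."""
    need = meta["size"]
    used = sum(m["size"] for m in cache.values())
    gains = {n: cache_gain(m["popularity"], m["size"], m["isp_cost"], m["store_cost"])
             for n, m in cache.items()}
    new_gain = cache_gain(meta["popularity"], meta["size"], meta["isp_cost"], meta["store_cost"])
    victims = []
    for n in sorted(gains, key=gains.get):            # least gain first
        if used + need <= capacity:
            break
        if gains[n] >= new_gain:
            return False                               # not worth replacing
        victims.append(n)
        used -= cache[n]["size"]
    if used + need > capacity:
        return False
    for n in victims:
        del cache[n]
    cache[name] = meta
    return True
```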

19.
Cooperative Caching Strategy in Mobile Ad Hoc Networks Based on Clusters
In this paper, we present a scheme called Cluster Cooperative (CC) caching for mobile ad hoc networks. In the CC scheme, the network topology is partitioned into non-overlapping clusters based on physical network proximity. On a local cache miss, each client looks for the data item within its cluster. If no client inside the cluster has cached the requested item, the request is forwarded to the next client on the routing path towards the server. A cache replacement policy, called Least Utility Value with Migration (LUV-Mi), is developed. The LUV-Mi policy is suitable for cooperation in a clustered ad hoc environment because it considers the performance of the entire cluster along with the performance of the local client. Simulation experiments show that the CC caching mechanism achieves significant improvements in cache hit ratio and average query latency in comparison with other caching strategies.

20.
This article surveys a trend in web caching: cache cooperation. It describes the problems that cooperative caching is meant to solve, lists several models of cache cooperation and representative protocols for each, and introduces several approaches to proxy pruning.
