Similar Articles
 19 similar articles found (search time: 78 ms)
1.
Implementation Methods for Cooperative Network Caching   (Total citations: 1; self-citations: 0; citations by others: 1)
This article surveys a trend in network caching: cooperative caching. It explains the problems that cooperative caching aims to solve, lists several models of cache cooperation together with their representative protocols, and reviews several approaches to proxy trimming.

2.
ZTE Technology Journal, 2015, (4): 58-62
This article introduces a future network architecture that effectively supports cooperative caching, the smart collaborative network, and then proposes an efficient cooperative caching mechanism called CoLoRCache. The main goals of CoLoRCache are to reduce cache redundancy and to establish a cache-sharing mechanism. We validate CoLoRCache through simulation; the results show that, compared with other caching mechanisms, CoLoRCache achieves a higher cache hit ratio and the smallest request hit distance.

3.
To improve cache utilization in NDN (Named Data Networking), this paper proposes a neighbor-cooperative cache management strategy based on an ant-colony replacement algorithm (ACNCM). First, single-node cache replacement is modeled as a 0/1 knapsack problem: the cache value of locally stored content is defined from information such as its size, usage frequency, and the depth of neighbor replicas, and an ant-colony-based cache replacement algorithm is proposed. Then, following the idea of neighborhood cooperation, routers periodically exchange their cache information, and content evicted from a single node is placed at a chosen neighbor to complete cooperative cache management. Experimental results show that ACNCM outperforms existing methods in cache hit ratio, network overhead, and average response delay.
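The 0/1-knapsack view described above can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the cache-value formula, the `CachedItem` fields, and the greedy (rather than ant-colony) eviction order are all assumptions chosen to show how size, frequency, and neighbor-replica depth might combine into an eviction decision.

```python
from dataclasses import dataclass

@dataclass
class CachedItem:
    name: str
    size: int            # bytes
    freq: float          # recent request frequency
    neighbor_depth: int  # hops to nearest neighbor replica (0 = none known)

def cache_value(item: CachedItem) -> float:
    # Content with a nearby neighbor replica is cheaper to evict, so its
    # value is discounted by the replica depth (illustrative weighting).
    discount = 1.0 / (1 + item.neighbor_depth) if item.neighbor_depth else 1.0
    return item.freq * item.size * discount

def make_room(cache: list[CachedItem], capacity: int, incoming: CachedItem):
    """Greedy knapsack-style replacement: evict lowest value-per-byte first."""
    used = sum(i.size for i in cache)
    victims = []
    for item in sorted(cache, key=lambda i: cache_value(i) / i.size):
        if used + incoming.size <= capacity:
            break
        victims.append(item)
        used -= item.size
    if used + incoming.size > capacity:
        return None  # the incoming item does not fit even in an empty cache
    remaining = [i for i in cache if i not in victims]
    remaining.append(incoming)
    return remaining, victims
```

In the full ACNCM strategy the eviction set would be chosen by the ant-colony search and the victims offered to neighbors rather than dropped; the greedy pass above only shows the knapsack framing.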

4.
Jointly considering energy optimization and performance improvement in Content-Centric Networking (CCN), this paper proposes an energy-optimized implicit cooperative caching mechanism. In cache decisions, the energy saved by caching is used as the decision criterion, preferring to cache at nodes far from the user; each Data packet carries the hop count to the nearest upstream cache to enable implicit cooperation, which relieves contention for cache space at nodes near the user and increases cache diversity among neighboring nodes. In cache replacement, the content with the smallest caching energy saving is evicted, achieving the best energy optimization. Simulation results show that the mechanism attains a good cache hit ratio and average routing hop count while effectively reducing network energy consumption.
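The decision rule above can be sketched with a toy energy model. Everything here is an assumption for illustration (the paper's actual energy formulas are not given in the abstract): caching saves transport energy over the hops to the nearest upstream cache, at the cost of holding the content locally.

```python
def should_cache(request_rate: float, hops_to_upstream_cache: int,
                 e_cache: float, e_hop: float, threshold: float = 0.0) -> bool:
    """Illustrative implicit-cooperation cache decision.

    request_rate: expected requests per unit time for this content
    hops_to_upstream_cache: hop count carried in the Data packet
    e_cache: energy cost per unit time of caching the content locally
    e_hop: transport energy per request per hop
    """
    # Energy saved per unit time: avoided transport over the hops to the
    # nearest upstream copy, minus the local caching cost.
    saving = request_rate * hops_to_upstream_cache * e_hop - e_cache
    return saving > threshold
```

A node far from the user (large `hops_to_upstream_cache`) sees a larger saving and is more likely to cache, which matches the abstract's goal of easing contention near the edge.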

5.
Multi-agent reinforcement learning can treat target content as a whole and cooperatively manage multiple target types at once; this paper studies a cooperative edge-network resource caching method based on multi-agent reinforcement learning. It builds a dynamically transmitting edge-network cache model, defines action selection via multi-agent reinforcement learning, uses a greedy algorithm to set the cache placement paths of network resources, and caches edge-network resources cooperatively through best response, completing the design of the method. Experimental results on two sets of edge-network resource request data from different operating periods show that the method achieves cooperative caching across different data sizes; with 8 GB of resource data, caching time stays within 10 s, 120 s shorter than the traditional method, effectively alleviating network congestion and showing practical applicability.

6.
An In-Network Caching Strategy for Content-Centric Networking Based on Node Betweenness and Replacement Rate   (Total citations: 2; self-citations: 0; citations by others: 2)
In-network caching is one of the key technologies of Content-Centric Networking (CCN). CCN's default ALWAYS caching strategy creates substantial redundancy, while the improved Betw scheme considers only node betweenness, so high-betweenness nodes suffer frequent cache churn and reduced content availability. To solve this, the paper proposes BetwRep, a new in-network caching strategy that uses both node betweenness and the node's cache replacement rate as the caching decision metric, balancing the importance of a node's position against the freshness of cached content to place returning content optimally. Network simulations on the ndnSIM platform show that BetwRep achieves a lower source request load and fewer average hops than both the Betw and ALWAYS schemes.
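A minimal sketch of a BetwRep-style decision metric, under assumptions (the exact combination of betweenness and replacement rate is not given in the abstract, and the node dictionaries here are hypothetical): high betweenness raises a node's score, while a high replacement rate (cache churn) lowers it, since content cached there would be evicted quickly.

```python
def betwrep_score(betweenness: float, replacement_rate: float) -> float:
    # Illustrative weighting: favor important nodes, penalize churning caches.
    return betweenness / (1.0 + replacement_rate)

def choose_cache_node(path_nodes: list[dict]) -> dict:
    """Pick the on-path node with the best score for the returning Data packet."""
    return max(path_nodes,
               key=lambda n: betwrep_score(n["betweenness"], n["repl_rate"]))
```

Note how a very central but heavily churning node can lose to a slightly less central, more stable one, which is exactly the failure mode of Betw that the abstract describes.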

7.
This paper proposes a novel cooperative distribution strategy based on caches in reconfigurable routers to accelerate streaming media. Multiple edge router nodes cooperatively cache hot video data and serve users nearby, greatly reducing the performance requirements on streaming servers (especially bandwidth), markedly cutting backbone traffic, and clearly improving user response delay. In addition, a prototype system was implemented to evaluate the performance of the router-cache-based cooperative streaming distribution strategy; the results show significant improvements in network performance and user experience over existing schemes.

8.
To improve in-network caching performance in Content-Centric Networking, this paper proposes a lightweight cooperative cache storage strategy based on hierarchical partitioning. Through cooperation among Interest packets, Data packets, and each router's local PIT, content is divided into multiple priority levels so that different content is cached at different routers along the path. Experiments show the strategy effectively reduces access hops, raises the average cache hit ratio, and lowers server load.

9.
This paper briefly reviews several optical buffering techniques commonly used in WDM optical packet-switched networks, proposes a new partially shared optical buffering technique, builds a mathematical model for it, analyzes its performance through numerical simulation, and compares it with the traditional output buffering technique.

10.
葛天 (Ge Tian), Video Engineering, 2022, 46(2): 149-151
The application of 5G has improved network performance, especially in mobile networks, and has driven exponential growth of the data active in the network. Existing transmission networks struggle to fully meet the transmission and presentation requirements of 5G applications, leading to problems such as congestion, high latency, and insufficient clarity that degrade the user experience. This paper therefore proposes a vertically cooperative caching scheme grounded in current 5G practice: caches are deployed vertically across terminals, base stations, and the core network to achieve vertical cooperative caching, improving the transmission and storage capability of 5G networks and the user experience of network applications.

11.
Edge caching is an effective feature of the next 5G network to guarantee the availability of service content and a reduced response time for the user. However, the placement of cache content remains an issue to fully take advantage of edge caching. In this paper, we address the proactive caching problem in the Heterogeneous Cloud Radio Access Network (H-CRAN) from a game-theoretic point of view. The problem is formulated as a bargaining game where the remote radio heads (RRHs) dynamically negotiate and decide which content to cache in which RRH under energy-saving and cache-capacity constraints. The Pareto optimal equilibrium is proved for the cooperative game by the iterative Nash bargaining algorithm. We compare cooperative and noncooperative proactive caching games and demonstrate how the selfishness of different players can affect overall system performance. We also show that our cooperative proactive caching game reduces energy consumption by 40% compared with the noncooperative game and by 68% compared with the no-game strategy. Moreover, the number of satisfied requests at the RRHs with the proposed cooperative proactive caching scheme is significantly increased.

12.
Aiming at the problems of massive content transmission and the limited wireless backhaul resources of UAVs in UAV-assisted cellular networks, a cooperative caching algorithm for cache-enabled UAVs and users is proposed. By deploying caches on UAVs and user devices, popular content requested by users is cached and delivered locally, which relieves the backhaul resource and energy consumption of UAVs and reduces traffic load and user delay. A joint UAV-user caching optimization problem is formulated with the goal of minimizing user content-acquisition delay and decomposed into a UAV caching subproblem and a user caching subproblem, solved with the alternating direction method of multipliers (ADMM) and a global greedy algorithm, respectively. Iteration yields a convergent optimization result, realizing cooperative caching between UAVs and users. Simulation results show that the proposed algorithm effectively reduces user content-acquisition delay and improves system performance.
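The global-greedy user-caching subproblem mentioned above can be sketched as follows. This is an illustrative toy, not the paper's formulation: the content tuples and the "expected delay saving per byte" ranking are assumptions standing in for the real objective.

```python
def greedy_user_cache(contents, capacity):
    """Greedy selection of contents to cache on a user device.

    contents: list of (name, size, popularity, delay_saving_per_request)
    capacity: total cache budget in the same size units
    Returns the names cached, chosen by expected delay saving per byte.
    """
    cached, used = [], 0
    ranked = sorted(contents,
                    key=lambda c: c[2] * c[3] / c[1],  # saving density
                    reverse=True)
    for name, size, popularity, saving in ranked:
        if used + size <= capacity:
            cached.append(name)
            used += size
    return cached
```

In the full algorithm this greedy pass would alternate with the ADMM-based UAV caching subproblem until the joint solution converges; the sketch shows only one side of that iteration.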

13.
Wireless mesh networks (WMNs) have been proposed to provide cheap, easily deployable and robust Internet access. The dominant Internet-access traffic from clients causes a congestion bottleneck around the gateway, which can significantly limit the throughput of the WMN clients in accessing the Internet. In this paper, we present MeshCache, a transparent caching system for WMNs that exploits the locality in client Internet-access traffic to mitigate the bottleneck effect at the gateway, thereby improving client-perceived performance. MeshCache leverages the fact that a WMN typically spans a small geographic area and hence mesh routers are easily over-provisioned with CPU, memory, and disk storage, and extends the individual wireless mesh routers in a WMN with built-in content caching functionality. It then performs cooperative caching among the wireless mesh routers. We explore two architecture designs for MeshCache: (1) caching at every client access mesh router upon file download, and (2) caching at each mesh router along the route the Internet-access traffic travels, which requires breaking a single end-to-end transport connection into multiple single-hop transport connections along the route. We also leverage the abundant research results from cooperative web caching in the Internet in designing cache selection protocols for efficiently locating caches containing data objects for these two architectures. We further compare these two MeshCache designs with caching at the gateway router only. Through extensive simulations and evaluations using a prototype implementation on a testbed, we find that MeshCache can significantly improve the performance of client nodes in WMNs.
In particular, our experiments with a Squid-based MeshCache implementation deployed on the MAP mesh network testbed with 15 routers show that compared to caching at the gateway only, the MeshCache architecture with hop-by-hop caching reduces the load at the gateway by 38%, improves the average client throughput by 170%, and increases the number of transfers that achieve a throughput greater than 1 Mbps by a factor of 3.

14.
In order to optimize replica placement in information-centric networking, an edge-first-based cooperative caching strategy (ECCS) is proposed. Under this strategy, the cache decision is made during the Interest forwarding stage, and the decision result and statistical information are forwarded hop by hop to upstream routers. Using this information, upstream nodes can update their cache information tables immediately to achieve cooperative caching. Experimental results indicate that ECCS achieves salient performance gains in server load reduction ratio, average hop reduction ratio, and average cache hit ratio compared with current strategies.

15.
Performance enhancing proxies (PEPs) are widely used to improve the performance of TCP over high delay-bandwidth product links and links with high error probability. In this paper we analyse the performance of using TCP connection splitting in combination with web caching via traces obtained from a commercial satellite system. We examine the resulting performance gain under different scenarios, including the effect of caching, congestion, random loss and file sizes. We show, via analysing our measurements, that the performance gain from using splitting is highly sensitive to random losses and the number of simultaneous connections, and that such sensitivity is alleviated by caching. On the other hand, the use of a splitting proxy enhances the value of web caching in that cache hits result in much more significant performance improvement over cache misses when TCP splitting is used. We also compare the performance of using different versions of HTTP in such a system. Copyright © 2003 John Wiley & Sons, Ltd.

16.
Network caching of objects has become a standard way of reducing network traffic and latency in the web. However, web caches exhibit poor performance, with a hit rate of about 30%. A solution to improve this hit rate is to have a group of proxies form a co-operation in which objects can be cached for later retrieval. A co-operative cache system includes protocols for hierarchical and transversal caching. The drawback of such a system lies in the resulting network load due to the number of messages that need to be exchanged to locate an object. This paper proposes a new co-operative web caching architecture, which unifies previous methods of web caching. Performance results show that the architecture achieves up to a 70% co-operative hit rate and accesses the cached object in at most two hops. Moreover, the architecture is scalable with low traffic and database overhead. Copyright © 2002 John Wiley & Sons, Ltd.
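The two-hop bound claimed above can be illustrated with a toy lookup routine, under assumptions (the directory structure and return values are hypothetical, not the paper's protocol): a local check costs one hop, and a shared directory of sibling proxies resolves any co-operative hit in a second hop, avoiding the per-object message flooding the abstract criticizes.

```python
def lookup(obj_id, local_cache, sibling_directory):
    """Resolve an object in at most two hops.

    local_cache: set of object ids held by this proxy
    sibling_directory: dict mapping object id -> sibling proxy holding it
    Returns (source, hops); hops is None on a miss to the origin server.
    """
    if obj_id in local_cache:
        return "local", 1
    owner = sibling_directory.get(obj_id)
    if owner is not None:
        return owner, 2          # one hop to the directory answer, one to fetch
    return "origin", None        # co-operative miss: fall back to the origin
```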

17.
Building on current research in dynamic Web service composition, this paper proposes a hybrid Web service composition method based on Sub Web Services. Combining static and dynamic composition, the description of a dynamically generated composition plan is saved in a buffer pool; when a user invokes the composite service, the system searches the pool for that composition and calls it on the user's behalf. The composition algorithm decomposes multi-input multi-output Web services into multi-input single-output Sub Web Services for execution. Besides avoiding repeated dynamic composition, this reduces the dependence on multi-output interfaces during composition and improves the efficiency of the system's services.

18.
This paper introduces an adaptive cache proxy to improve the performance of web access in soft real-time applications. It consists of client proxies and cooperative proxy servers with a server-side pushing schema. Large amounts of heterogeneous data are stored in the proxy servers and delivered to clients through computer networks to reduce response time and network traffic. The adaptive proxy pre-fetches and replaces heterogeneous data dynamically in consideration of network cost, data size, data change rate, etc. The simulation results show that the modified LUV algorithm has better performance in terms of hit rate, byte hit rate, and delay saving rate. With cooperative proxy caching, the performance of the proxy caching system is shown to be more predictable even when the proxies must deal with a variety of data. The modified adaptive TTL algorithm has better performance in terms of the combination of temporal coherency and system overheads.

19.
An Online Method for Fragment-Based Caching of Dynamic Web Pages   (Total citations: 1; self-citations: 1; citations by others: 0)
尤朝, 周明辉, 林泊, 曹东刚, 梅宏. Acta Electronica Sinica, 2009, 37(5): 1087-1091
Fragment-based caching can effectively improve the quality of service of dynamic web pages, but few existing systems were designed with it, and applying it to such systems is a significant challenge. This paper presents an online fragment-based caching method for dynamic web pages that evolves an existing system into a fragment-based one while it continues serving users. The method has three advantages: (1) the original system evolves online, without interrupting the service it provides to users; (2) template maintenance is simplified, and the granularity of logic execution drops from whole pages to fragments, reducing server-side load; (3) it is independent of the original system, effectively supporting system changes and upgrades. Finally, the method is implemented and evaluated; the results show that it achieves the system evolution well and improves the system's quality of service.
