20 similar documents found; search time: 62 ms.
1.
To improve the overall performance of proxy systems, this paper proposes a novel distributed proxy cache system, the two-tier cache cluster, based on the temporal locality and similarity of internal network users' access patterns and built on existing distributed cache systems. The two-tier cache cluster consists of an intra-network cluster cache tier and a proxy cluster cache tier. By adopting a two-tier proxy cache structure, it makes full use of existing internal network resources, distributes the proxy load, reduces communication overhead between proxies, and improves the utilization of cache resources...
2.
3.
A proxy cache model based on association rules (Total citations: 1; self-citations: 0; citations by others: 1)
Applying association-rule techniques from data mining to proxy cache management and scheduling, this paper proposes a novel proxy cache model based on association rules, explains in detail the key technologies involved in implementing the model, and gives a preliminary analysis of its performance cost. Experiments show that the model is feasible and effective.
4.
With the development and widespread adoption of Internet technology, streaming media has come into broad use on the Internet. Access to streaming media objects requires a high and stable transfer rate, consumes substantial network bandwidth over long durations, can easily interfere with access to other types of files, and, when users are numerous, can overload the origin streaming server. Proxy caching helps address these problems. This paper introduces the characteristics of streaming media proxy caching, streaming media proxy caching algorithms, evaluation metrics for streaming media proxy caching, and the factors that affect its effectiveness.
5.
Research and implementation of an improved cache scheme based on the LRU algorithm (Total citations: 1; self-citations: 0; citations by others: 1)
The LRU (Least Recently Used) replacement algorithm is widely used in many single-processor applications. In multiprocessor architectures, however, traditional LRU is not optimal for reducing the miss rate of a shared cache. This paper studies basic cache-block replacement algorithms and, building on an analysis of LRU, proposes an improved cache scheme based on LRU and access probability, which jointly considers recency of use and access frequency when selecting the candidate victim block, improving the replacement algorithm's suitability for multiprocessors.
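The recency-plus-frequency idea in this entry can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the scoring function and the `alpha` weight are assumptions made here for demonstration.

```python
from collections import OrderedDict

class FrequencyAwareLRU:
    """Cache that evicts the block with the lowest combined
    recency/frequency score rather than pure LRU order.
    `alpha` (weight on recency vs. frequency) is a hypothetical
    parameter; the cited paper does not give its exact formula."""

    def __init__(self, capacity, alpha=0.5):
        self.capacity = capacity
        self.alpha = alpha
        self.data = OrderedDict()   # key -> value
        self.freq = {}              # key -> access count
        self.last = {}              # key -> last-access timestamp
        self.clock = 0              # logical time

    def get(self, key):
        if key not in self.data:
            return None
        self._touch(key)
        return self.data[key]

    def put(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            victim = min(self.data, key=self._score)
            self.data.pop(victim)
            self.freq.pop(victim)
            self.last.pop(victim)
        self.data[key] = value
        self._touch(key)

    def _touch(self, key):
        self.clock += 1
        self.last[key] = self.clock
        self.freq[key] = self.freq.get(key, 0) + 1
        self.data.move_to_end(key)

    def _score(self, key):
        # Higher score = more valuable; the minimum is evicted.
        recency = self.last[key] / self.clock
        return self.alpha * recency + (1 - self.alpha) * self.freq[key]
```

A frequently accessed block thus survives even when it is not the most recently used, which is the adaptation to shared caches that the entry describes.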
6.
7.
To improve cache utilization in NDN (Named Data Networking), a neighbor-cooperative cache management strategy based on an ant-colony replacement algorithm (ACNCM) is proposed. First, the single-node cache replacement problem is modeled as a 0/1 knapsack problem: the cache value of locally stored content is defined from the size of the cached data, its usage frequency, the depth of neighbor replicas, and related information, and an ant-colony-based cache replacement algorithm is proposed. Then, following the idea of neighborhood cooperation, routing nodes periodically exchange their cache information, and content evicted from a single node is placed on a selected neighbor node to achieve cooperative cache management. Experimental results show that ACNCM outperforms existing methods in cache hit ratio, network overhead, and average response delay.
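The 0/1 knapsack formulation in this entry can be illustrated with a small dynamic-programming sketch. The item values here are illustrative stand-ins for the paper's cache-value definition, and the ant-colony heuristic that the paper actually uses to search the solution space is omitted.

```python
def select_cache_contents(items, capacity):
    """0/1 knapsack over cached objects: choose the subset of
    (name, size, value) triples maximizing total cache value within
    `capacity`. Classic DP for illustration; the cited paper solves
    this heuristically with an ant-colony algorithm."""
    n = len(items)
    dp = [0] * (capacity + 1)                 # dp[c] = best value at capacity c
    keep = [[False] * (capacity + 1) for _ in range(n)]
    for i, (_, size, value) in enumerate(items):
        for c in range(capacity, size - 1, -1):
            if dp[c - size] + value > dp[c]:
                dp[c] = dp[c - size] + value
                keep[i][c] = True
    # Walk the keep table backwards to recover the chosen set.
    chosen, c = [], capacity
    for i in range(n - 1, -1, -1):
        if keep[i][c]:
            chosen.append(items[i][0])
            c -= items[i][1]
    return dp[capacity], list(reversed(chosen))
```

Everything not selected is a replacement candidate; in ACNCM those evicted objects would then be offered to neighbor nodes rather than discarded.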
8.
9.
The cache replacement algorithm has an important effect on the performance of a proxy cache system. This paper studies Web cache replacement algorithms and proposes an improvement to the Hybrid algorithm. Experimental results show that the improved algorithm achieves a substantially higher byte hit ratio while maintaining a relatively low delay ratio and a high URL hit ratio, which is of practical value for improving network conditions.
10.
11.
The development of proxy caching is essential in the area of video‐on‐demand (VoD) to meet users' expectations. VoD requires high bandwidth and creates high traffic due to the nature of media. Many researchers have developed proxy caching models to reduce bandwidth consumption and traffic. Proxy caching keeps part of a media object to meet the viewing expectations of users without delay and provides interactive playback. If the caching is done continuously, the entire cache space will be exhausted at one stage. Hence, the proxy server must apply cache replacement policies to replace existing objects and allocate the cache space for the incoming objects. Researchers have developed many cache replacement policies by considering several parameters, such as recency, access frequency, cost of retrieval, and size of the object. In this paper, the Weighted‐Rank Cache replacement Policy (WRCP) is proposed. This policy uses such parameters as access frequency, aging, and mean access gap ratio and such functions as size and cost of retrieval. The WRCP applies our previously developed proxy caching model, Hot‐Point Proxy, at four levels of replacement, depending on the cache requirement. Simulation results show that the WRCP outperforms our earlier model, the Dual Cache Replacement Policy.
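A weighted-rank score of the kind this entry lists could be combined as follows. The formula and weights below are purely illustrative assumptions; the actual WRCP scoring function is defined in the cited paper.

```python
def weighted_rank(freq, age, mean_gap, cost, size,
                  w_freq=1.0, w_age=1.0, w_gap=1.0):
    """Illustrative weighted-rank score: objects accessed often,
    recently, and at short intervals, with high retrieval cost per
    byte, rank high. This weighting is an assumption made here for
    demonstration, not the WRCP formula itself."""
    utility = w_freq * freq * (cost / size)
    penalty = w_age * age + w_gap * mean_gap + 1.0  # +1 avoids div by zero
    return utility / penalty

def pick_victim(cache):
    """Evict the object with the lowest weighted rank.
    `cache` maps object name -> dict of scoring parameters."""
    return min(cache, key=lambda name: weighted_rank(**cache[name]))
```

A real policy would recompute these ranks on each replacement pass and, as WRCP does, could apply them at several levels depending on how much space must be freed.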
12.
13.
This paper proposes the ROC (Resist-Overload Capability) cache admission policy and replacement algorithm, which solve the cache management problem of variable-bit-rate video servers that use interval caching. Deterministic cache admission provides guaranteed quality of service but adapts poorly to interactive applications and yields low cache utilization; statistical-multiplexing admission requires massive convolution computation and is therefore impractical. The ROC admission policy, using simple computation, provides probabilistic cache quality-of-service guarantees and high cache utilization. Simulation results show that under a typical system configuration, the ROC admission policy and replacement algorithm improve system throughput by about 25%; compared with the deterministic admission policy and the STP-L cache replacement algorithm, they serve about 17% more video streams, and average cache utilization is about 38% higher.
14.
In this paper, we present a scheme, called Cluster Cooperative (CC) for caching in mobile ad hoc networks. In CC scheme, the
network topology is partitioned into non-overlapping clusters based on the physical network proximity. For a local cache miss,
each client looks for data item in the cluster. If no client inside the cluster has cached the requested item, the request
is forwarded to the next client on the routing path towards server. A cache replacement policy, called Least Utility Value
with Migration (LUV-Mi) is developed. The LUV-Mi policy is suitable for cooperation in clustered ad hoc environment because
it considers the performance of an entire cluster along with the performance of local client. Simulation experiments show
that CC caching mechanism achieves significant improvements in cache hit ratio and average query latency in comparison with
other caching strategies.
15.
This paper introduces an adaptive cache proxy to improve the performance of web access in soft real-time applications. It
consists of client proxies and cooperative proxy servers with a server-side pushing schema. The large amount of heterogeneous
data will be stored in the proxy servers and delivered to clients through computer networks to reduce the response time and
network traffic. The adaptive proxy pre-fetches and replaces heterogeneous data dynamically in consideration of networks cost,
data size, data change rate, etc. The simulation results show that the modified LUV algorithm has better performance in terms
of hit rate, byte hit rate, and delay saving rate. With the cooperative proxy caching, it is shown that the performance of
the proxy caching system is more predictable even if the proxies need to deal with a variety of data. The modified adaptive
TTL algorithm has better performance in terms of the combination of temporal coherency and system overheads.
Zhubin Zhang
16.
17.
A proxy caching algorithm based on a behavioral preference model of media users (Total citations: 2; self-citations: 0; citations by others: 2)
Proxy caching is currently widely used to improve the quality of service of streaming media delivery. Starting from an analysis of real user log files, this paper uses the discovered behavioral distribution model of users browsing streaming media objects to propose a new video streaming cache algorithm. Simulation results show that the algorithm achieves high performance while recording only a small amount of user access information.
18.
Mobile computing is considered of major importance to the computing industry for the forthcoming years due to the progress in the wireless communications area. A proxy-based architecture for accelerating Web browsing in wireless customer premises networks is presented. Proxy caches, maintained in base stations, are constantly relocated to follow the roaming user. A cache management scheme is proposed, which involves the relocation of full caches to the most probable cells but also percentages of the caches to less likely neighbors. Relocation is performed according to the output of a user movement prediction algorithm based on a learning automaton. The simulation of the scheme shows considerable benefits for the end user.
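The learning-automaton-driven prediction in this entry could take the following shape. This sketch uses a linear reward-inaction (L_RI) automaton, one standard variant; the cited paper's exact automaton and learning rate are not specified here, so both are assumptions.

```python
class MovementPredictor:
    """Linear reward-inaction (L_RI) learning automaton over neighbor
    cells: the probability of the cell a user actually moves to is
    reinforced, so prediction converges to the user's habitual path.
    The learning rate `a` is an illustrative choice, not the paper's."""

    def __init__(self, cells, a=0.2):
        self.cells = list(cells)
        self.a = a
        # Start with a uniform probability over neighbor cells.
        self.p = {c: 1.0 / len(self.cells) for c in self.cells}

    def predict(self):
        # Most probable next cell: would receive the fully relocated
        # cache; less likely neighbors would get partial caches
        # in proportion to their probabilities.
        return max(self.p, key=self.p.get)

    def observe(self, actual_cell):
        # Reward the action that matched reality; L_RI applies
        # no update on a miss beyond the renormalizing decay.
        for c in self.cells:
            if c == actual_cell:
                self.p[c] += self.a * (1.0 - self.p[c])
            else:
                self.p[c] *= (1.0 - self.a)
```

The update keeps the probabilities summing to one, and repeated observations of the same handover quickly concentrate mass on the user's usual next cell.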
19.
20.
Abdullah Abonamah, Akram Al‐Rawi, Mohammad Minhaz. International Journal of Communication Systems, 2002, 15(6): 513-530
Network caching of objects has become a standard way of reducing network traffic and latency in the web. However, web caches exhibit poor performance, with a hit rate of about 30%. A solution to improve this hit rate is to have a group of proxies form a co‐operation where objects can be cached for later retrieval. A co‐operative cache system includes protocols for hierarchical and transversal caching. The drawback of such a system lies in the resulting network load due to the number of messages that need to be exchanged to locate an object. This paper proposes a new co‐operative web caching architecture, which unifies previous methods of web caching. Performance results show that the architecture achieves up to a 70% co‐operative hit rate and accesses the cached object in at most two hops. Moreover, the architecture is scalable with low traffic and database overhead. Copyright © 2002 John Wiley & Sons, Ltd.