Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
Hash routing is an emerging approach to coordinating a collection of collaborative proxy caches. Hash routing partitions the entire URL space among the proxy caches: each partition is assigned to a cache server, and duplication of cache contents is eliminated. Client requests to a cache server for objects outside its assigned partition are forwarded to the proper sibling caches. In the presence of access skew, the load levels of the cache servers can be quite unbalanced, limiting the benefits of hash routing. We examine adaptable controlled replication (ACR) of non-assigned-partition objects in each cache server to reduce the load imbalance and relieve the problem of hot-spot references. Trace-driven simulations are conducted to study the effectiveness of ACR. The results show that (1) access skew exists, and the load of the cache servers tends to be unbalanced under hash routing; (2) with a relatively small amount of ACR, say 10% of the cache size, significant improvements in load balance can be achieved; (3) ACR provides a very effective remedy for load imbalance due to hot-spot references; and (4) increasing the cache size does not improve load balance unless replication is allowed.
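The routing-plus-replication idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: the hash function, the `replica_budget` cap, and the hit-count bookkeeping are all assumptions.

```python
import hashlib

def home_cache(url, n_caches):
    """Hash routing: each URL has exactly one home cache."""
    h = int(hashlib.md5(url.encode()).hexdigest(), 16)
    return h % n_caches

class ACRCache:
    """A cache server holding its assigned partition plus a small,
    bounded set of replicated hot objects from other partitions."""
    def __init__(self, cache_id, n_caches, replica_budget):
        self.cache_id = cache_id
        self.n_caches = n_caches
        self.replica_budget = replica_budget  # e.g. ~10% of cache size
        self.replicas = {}                    # url -> hit count

    def serve(self, url):
        if home_cache(url, self.n_caches) == self.cache_id:
            return "local"                    # in assigned partition
        if url in self.replicas:
            self.replicas[url] += 1
            return "replica"                  # replicated hot object
        # forward to the sibling; replicate only within the budget,
        # which is what keeps the replication "controlled"
        if len(self.replicas) < self.replica_budget:
            self.replicas[url] = 1
        return "forwarded"
```

The budget corresponds to the paper's observation that a small replication fraction (around 10% of the cache size) already yields most of the load-balance improvement.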

2.
A Prefetching Strategy Based on the State of the Memory-Access Miss Queue
As the gap between memory speed and processor speed grows ever wider, memory-access performance has become the bottleneck of overall computer system performance. Based on an analysis of instruction-cache and data-cache miss behavior, this paper proposes a prefetching strategy that incorporates the state of the memory-access miss queue. The strategy preserves the order of instruction and data accesses, which helps in extracting prefetch streams, and separates instruction-stream prefetching from data-stream prefetching to avoid mutual replacement. In choosing when to issue a prefetch, it considers not only whether the bus is currently idle but also the state of the miss queue, reducing interference with the processor's normal memory requests. A stream-filtering mechanism improves prefetch accuracy and lowers the memory-bandwidth demand of prefetching. Results show that with this strategy the processor's average memory-access latency drops by 30%, and the IPC of the SPEC CPU2000 programs rises by 8.3% on average.
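The issue-time decision described above can be sketched as a simple gate. This is a hypothetical fragment; the 0.5 occupancy threshold is an assumed parameter, not taken from the paper.

```python
def should_issue_prefetch(bus_idle, miss_queue_len, miss_queue_capacity,
                          threshold=0.5):
    """Issue a prefetch only when the bus is idle AND the memory-access
    miss queue is lightly loaded, so that prefetch traffic does not
    delay the processor's demand misses."""
    if not bus_idle:
        return False
    return miss_queue_len < threshold * miss_queue_capacity
```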

3.
Random and unpredictable user behavior during a multimedia presentation can cause long retrieval latency over the client–server connection. To address this problem, we propose a prefetching scheme that uses association rules obtained through data mining. Data mining provides information such as support, confidence, and association rules, which can be exploited to prefetch continuous media. Using this information, the proposed prefetching policy predicts user behavior and identifies segments likely to be accessed in the near future. The proposed prefetching scheme was implemented and tested on synthetic data to estimate its effectiveness. Performance experiments show that the scheme is effective in reducing latency, even for small cache sizes.
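The support/confidence machinery the abstract relies on can be sketched for pairwise rules. This is an illustrative simplification; the segment names, `min_support`, and `min_conf` values are assumptions.

```python
from collections import defaultdict

def mine_rules(sessions, min_support=2, min_conf=0.5):
    """Mine pairwise rules a -> b from past viewing sessions:
    support = number of sessions containing both segments,
    confidence = support / number of sessions containing a."""
    count = defaultdict(int)   # segment -> session count
    pair = defaultdict(int)    # (a, b)  -> co-occurrence count
    for s in sessions:
        items = set(s)
        for a in items:
            count[a] += 1
        for a in items:
            for b in items:
                if a != b:
                    pair[(a, b)] += 1
    rules = {}
    for (a, b), sup in pair.items():
        if sup >= min_support and sup / count[a] >= min_conf:
            rules.setdefault(a, []).append(b)
    return rules

def prefetch_candidates(rules, current_segment):
    """Segments to prefetch while the client plays current_segment."""
    return rules.get(current_segment, [])
```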

4.
The existing ULC mechanism effectively reduces data redundancy across multi-level caches and addresses the weak access locality at the storage-server cache, but when a storage server is connected to multiple application servers, ULC's capacity allocation does not maximize the marginal benefit of the storage server's cache resources. This paper therefore proposes MG-ULC, a dynamic second-level cache allocation policy for multiple applications sharing the cache. Built on the ULC mechanism, it gives a theoretical basis for cache allocation driven by marginal gain and dynamically allocates cache capacity according to each application's marginal gain in the second-level cache under its access pattern. Experimental results show that as the access patterns of the application servers change, MG-ULC allocates the second-level cache more reasonably than ULC, achieving higher cache utilization.
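Greedy allocation by marginal gain, the core idea attributed to MG-ULC above, can be sketched like this. The hit-rate curves are hypothetical inputs; how the real policy estimates marginal gain online is not specified here.

```python
def allocate_by_marginal_gain(hit_curves, total_blocks):
    """Greedy allocation: repeatedly give one cache block to the
    application whose hit curve gains most from one more block.
    hit_curves[i][k] = expected hits for application i with k blocks."""
    alloc = [0] * len(hit_curves)
    for _ in range(total_blocks):
        gains = [curve[alloc[i] + 1] - curve[alloc[i]]
                 for i, curve in enumerate(hit_curves)]
        best = max(range(len(gains)), key=gains.__getitem__)
        alloc[best] += 1
    return alloc
```

For concave hit curves this greedy procedure yields the allocation that maximizes total hits, which is why marginal gain is a natural allocation criterion.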

5.
Main memory cache performance continues to play an important role in determining the overall performance of object-oriented, object-relational and XML databases. An effective method of improving main memory cache performance is to prefetch or pre-load pages in advance of their usage, in anticipation of main memory cache misses. In this paper we describe a framework for creating prefetching algorithms with the novel features of path and cache consciousness. Path consciousness refers to the use of short sequences of object references at key points in the reference trace to identify paths of navigation. Cache consciousness refers to the use of historical page-access knowledge to guess which pages are likely to be resident in the main memory cache most of the time, and to treat those pages as absent for the purposes of prefetching. We have conducted a number of experiments comparing our approach against four highly competitive prefetching algorithms. The results show our approach outperforms existing prefetching techniques in some situations while performing worse in others. We provide guidelines as to when our algorithm should be used and when others may be more desirable.

6.
A Centrally Managed Web Caching System and Its Performance Analysis
Sharing cached files is an important way to reduce network traffic and server load. After introducing Web caching technology and ICP, the popular inter-cache communication protocol, this paper proposes a centrally managed Web caching system. The system dispatches each user HTTP request, according to a given algorithm, to an appropriate cache server in the system, thereby eliminating the heavy communication and cache-processing overhead between the servers inside the cache system and reducing the redundancy of cached content. Analysis shows that the centrally managed Web caching system offers higher cache efficiency, lower processing overhead, and smaller latency than a simple ICP-based caching system, and that it scales well.

7.
To identify the domain names with the highest query volume at a DNS server over a period of time, and given that heavily loaded DNS servers usually run with query logging switched off, so that log-based statistics cannot be used, this paper proposes an in-memory record-replacement counting algorithm that approximates, entirely in memory, the most-queried domain names over a time window. Experiments and practical deployment show that the algorithm performs well.
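The abstract does not name its replacement rule; a classic way to realize "approximate top-k counting in bounded memory" is Misra-Gries-style counter replacement, sketched here as one plausible reading rather than the paper's algorithm.

```python
def topk_heavy_hitters(stream, k):
    """Approximate the most frequent items using at most k counters:
    when a new domain arrives and all counters are occupied, decrement
    every counter and evict those that reach zero (Misra-Gries)."""
    counters = {}
    for name in stream:
        if name in counters:
            counters[name] += 1
        elif len(counters) < k:
            counters[name] = 1
        else:
            for key in list(counters):
                counters[key] -= 1   # replacement step: shrink all
                if counters[key] == 0:
                    del counters[key]
    return counters
```

Any domain queried more than n/(k+1) times out of n is guaranteed to survive in the counter table, which matches the goal of finding the few highest-volume names.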

8.
A Decentralized, Cooperative Web Cache Cluster Architecture
Web object caching is an important means of reducing Web traffic and access latency. By analyzing existing Web caching systems, this paper proposes a Web cache cluster architecture based on decentralized cooperation. The architecture removes the centralized design's need for a dedicated management server, eliminating both the risk that a failed management-server bottleneck brings the whole system down and the latency that server introduces. At the same time it avoids the multi-hop forwarding latency on cache misses and the overlapping cache contents of decentralized systems, improving resource utilization and system efficiency. The architecture is scalable and robust.

9.
Intelligent Web Acceleration Based on Network Performance: Caching and Prefetching
Web traffic accounts for a large share of network traffic. When bandwidth cannot be increased, techniques are needed to use the available bandwidth sensibly and improve network performance. This paper studies intelligent Web acceleration based on network performance metrics such as RTT (round-trip time). Based on an analysis of the traffic at a Web proxy server and on measurements of network RTT, it proposes an intelligent prefetch-control technique and a new cache replacement method. Simulations of the new algorithm show that the method raises the cache hit ratio. The study also shows that prefetching improves response time and effectively improves Web access performance without noticeably increasing network load.
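One way to make a replacement policy RTT-aware in the spirit described above is to rank objects by a fetch-cost utility. The exact formula is an assumption here, modeled on the well-known GreedyDual-Size family rather than taken from the paper.

```python
def evict_victim(cache_meta):
    """Evict the object that is cheapest to lose: rarely accessed,
    quick to re-fetch (low measured RTT), and small benefit per byte.
    cache_meta maps object id -> {"freq", "rtt", "size"}."""
    def utility(obj):
        m = cache_meta[obj]
        return m["freq"] * m["rtt"] / m["size"]  # assumed utility form
    return min(cache_meta, key=utility)
```

The point of weighting by RTT is that a miss on a slow, distant origin server costs far more than a miss on a nearby one, so the replacement decision should not treat all misses alike.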

10.
With the growth of Internet technology, traditional WebGIS servers suffer service delays or even denial of service under highly concurrent access by large user populations. To address this problem, this paper proposes a cloud-based WebGIS server architecture for high concurrency. The architecture uses a cloud platform to provide elastic compute and storage resources for the WebGIS servers, and relieves the concurrency bottleneck in three ways: load balancing, cache design, and database clustering. The open-source server software GeoServer was chosen for the experimental WebGIS deployment. Experimental data show that the caching mechanism clearly reduces WebGIS response time. Compared with a single physical server, the cloud-based WebGIS server cluster handles highly concurrent requests effectively, and as the cluster scales out, the cloud WebGIS system achieves a good speedup.
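A cache of the kind that produces the reported response-time reduction can be sketched as a plain LRU map in front of the tile renderer. GeoServer itself is not involved here; `render` is a stand-in for the expensive tile request.

```python
from collections import OrderedDict

class TileCache:
    """LRU cache for rendered map tiles: serving a cached tile skips
    the expensive render step, cutting response time on hot tiles."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.tiles = OrderedDict()

    def get(self, key, render):
        if key in self.tiles:
            self.tiles.move_to_end(key)      # mark as recently used
            return self.tiles[key]
        tile = render(key)                   # cache miss: render it
        self.tiles[key] = tile
        if len(self.tiles) > self.capacity:
            self.tiles.popitem(last=False)   # evict least recently used
        return tile
```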

11.
Internet proxy caching is a principal and effective technique for addressing slow Web access, heavy server load, and network congestion. To enable the design of proxy caching schemes that are effective, scalable, robust, adaptive, and stable, this paper studies the key technical problems of proxy caching, namely consistency policies, replacement policies, architecture, selection of cacheable content, and prefetching, and presents solutions for each.

12.
In this paper, a distributed Web and cache server called MOWS is described. MOWS is written in Java and built from modules that can be loaded locally or remotely. These modules implement various features of Web and cache servers and enable MOWS to run as a cluster of distributed Web servers. In addition to its distributed nature, MOWS can integrate external services through its own external interface: Java programs conforming to this interface can be loaded locally or remotely and executed at the server. The resulting system can provide effective Web access by both utilizing commonly available computing resources and offering distributed server functionality. The design considerations and system architecture of MOWS are described, along with several applications that illustrate its benefits.

13.
Network continuous-media applications are emerging at a great pace. Cache memories have long been recognized as a key resource (along with network bandwidth) whose intelligent exploitation can ensure high performance for such applications. Cache memories exist at the continuous-media servers and their proxy servers in the network. Within a server, cache memories exist in a hierarchy (at the host, at the storage devices, and at intermediate multi-device controllers). Our research is concerned with how best to exploit these resources in the context of continuous-media servers, and in particular how best to exploit the available cache memories at the drive, disk-array-controller, and host levels. Our results determine under which circumstances and system configurations it is preferable to devote the available memory to traditional caching (a.k.a. data sharing) techniques as opposed to prefetching techniques. In addition, we show how to configure the available memory for optimal performance and optimal cost. Our results show that prefetching techniques are preferable for small caches (such as those expected at the drive level). For very large caches (such as those employed at the host level), caching techniques are preferable. For intermediate cache sizes (such as those at multi-device controllers), a combination of both strategies should be employed.

14.
The main techniques for session synchronization in server clusters are cookie-based, database-based, and distributed-cache-based synchronization. Building on these, this paper proposes session synchronization based on instant copying between cluster nodes. Only the session identifier is stored at the client, avoiding the security risk of exposing user identity information that comes with storing the full session there. Requests from the client to the server carry only the session identifier rather than the complete session, greatly reducing the amount of data transferred and improving the client's access efficiency. The cluster nodes synchronize session data among themselves, so sessions need not be fetched from a database, which avoids the performance bottleneck of frequent database use; nor is a dedicated session-cache server required, which lowers development and deployment costs. The approach has good application prospects.
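The instant-copy scheme can be sketched in a few lines. This is a toy model with in-process "nodes"; a real deployment would replicate over the network, and the client would carry the session ID in a cookie.

```python
class ClusterNode:
    """Each node keeps a full session table; writes are copied to all
    peers immediately, while the client holds only the session ID."""
    def __init__(self):
        self.sessions = {}
        self.peers = []

    def write_session(self, session_id, data):
        self.sessions[session_id] = data
        for peer in self.peers:                 # instant copy to peers
            peer.sessions[session_id] = data

    def handle_request(self, session_id):
        # any node can serve the request from its local copy,
        # so the load balancer needs no sticky sessions
        return self.sessions.get(session_id)
```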

15.
Data access delay is a major bottleneck in utilizing current high-end computing (HEC) machines. Prefetching, where data is fetched before the CPU demands it, has been considered an effective solution for masking data access delay. However, current client-initiated prefetching strategies, where a computing processor initiates prefetching instructions, have many limitations. They do not work well for applications with complex, non-contiguous data access patterns. As technology advances continue to widen the gap between computing and data access performance, trading computing power for reduced data access delay has become a natural choice. In this paper, we present a server-based data-push approach and discuss its associated implementation mechanisms. In the server-push architecture, a dedicated server called the Data Push Server (DPS) initiates and proactively pushes data closer to the client in time. Issues such as what data to fetch, when to fetch it, and how to push it are studied. The SimpleScalar simulator is modified with a dedicated prefetching engine that pushes data for another processor to test DPS-based prefetching. Simulation results show that the L1 cache miss rate can be reduced by up to 97% (71% on average) over a superscalar processor for the SPEC CPU2000 benchmarks that have high cache miss rates.
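The push flow can be illustrated with a toy DPS that pre-loads a client-side cache from a prediction table. The predictor, the block IDs, and the hit accounting are all assumptions for illustration, not the paper's mechanisms.

```python
class DataPushServer:
    """Dedicated server that predicts a client's upcoming accesses and
    pushes the data into the client's cache before it is demanded."""
    def __init__(self, storage, predictor):
        self.storage = storage      # block id -> data
        self.predictor = predictor  # last block -> predicted next blocks

    def push_for(self, client_cache, last_block):
        for blk in self.predictor.get(last_block, []):
            if blk not in client_cache:
                client_cache[blk] = self.storage[blk]  # proactive push

def access(client_cache, storage, blk, stats):
    """Client-side access: record whether the push hid the miss."""
    if blk in client_cache:
        stats["hits"] += 1
    else:
        stats["misses"] += 1
        client_cache[blk] = storage[blk]   # demand fetch on a miss
```

The key difference from client-initiated prefetching is visible in the control flow: the server decides what and when to push, spending its own cycles so the client's demand accesses hit.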

16.
Data hoarding is a key problem in mobile databases: collecting data into the mobile client's cache lets the client operate autonomously on local data while disconnected, improving data availability. This paper proposes a hoarding method that collects and evicts data according to the transaction sequences in a mobile-transaction dependency graph. Hoarding units are composed of attribute/transaction-fragment access sets and tuple/transaction-fragment access sets, which serve as the hoarding granularity, making collection convenient and saving space. This settles the questions of what content to hoard and in what units.

17.
With continuing advances in computing and networking, video-on-demand (VOD) service has gradually become practical. A VOD system based on cooperative caching is a distributed architecture in which a central cluster stores the movie data while local clusters cache data and serve playback requests; such a system scales well. For the cooperative cache of a VOD system, this paper proposes a scheduling policy that combines static scheduling with dynamic migration. Static scheduling dispatches service requests in real time according to how movie data is distributed across the caches, while also balancing load dynamically across the VOD servers; dynamic task migration analyzes the distribution of service streams in real time and migrates services according to the principle of keeping service data local, further raising the cooperative cache hit ratio. The paper describes the topology of the cooperative-caching VOD system, gives a detailed design of static scheduling and dynamic migration, and provides a formal description of both.

18.
This paper proposes a parallel Web-page prefetching model for cluster servers. The model uses a Markov chain to analyze access paths and prefetches pages in parallel on the nodes of the Web server cluster, combining the performance and reliability of clustering with the fast response of prefetching. Experiments show that applying the model at the cluster's request dispatcher gives the server system a higher request hit ratio and greater throughput.
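A first-order Markov predictor over access paths, the core of the model above, can be sketched as transition counting. The page names and the single-best-successor choice are illustrative; the paper's model may rank several candidates for the cluster nodes to fetch in parallel.

```python
from collections import defaultdict

def build_markov(paths):
    """First-order Markov chain over observed access paths:
    transition counts from each page to its successors."""
    trans = defaultdict(lambda: defaultdict(int))
    for path in paths:
        for cur, nxt in zip(path, path[1:]):
            trans[cur][nxt] += 1
    return trans

def predict_next(trans, page):
    """Most likely next page, the candidate to prefetch."""
    nxt = trans.get(page)
    if not nxt:
        return None
    return max(nxt, key=nxt.get)
```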

19.
Different cache prefetching policies suit different access patterns. This paper surveys the state of research on cache prefetching in storage systems, constructs a triple model of access patterns based on access-pattern analysis, and tests, on a disk array, an adaptive prefetching policy suited to complex environments. The results show that the adaptive policy obtains the disk array's optimal performance across different environments.
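One plausible reading of "an access-pattern model driving an adaptive policy" is to classify the request stream and set the prefetch depth accordingly. The classes and depths below are assumed for illustration, not taken from the paper's triple model.

```python
def classify_pattern(offsets):
    """Classify a request stream as sequential, strided, or random from
    the differences between consecutive request offsets."""
    diffs = [b - a for a, b in zip(offsets, offsets[1:])]
    if all(d == 1 for d in diffs):
        return "sequential"
    if len(set(diffs)) == 1:
        return "strided"        # constant stride other than 1
    return "random"

def prefetch_depth(pattern):
    """Adapt prefetch aggressiveness to the detected pattern:
    deep read-ahead pays off only when the stream is predictable."""
    return {"sequential": 8, "strided": 4, "random": 0}[pattern]
```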

20.
Multiple prefetch adaptive disk caching
A new disk caching algorithm is presented that uses an adaptive prefetching scheme to reduce the average service time for disk references. Unlike schemes that simply prefetch the next sector or group of sectors, this method maintains information about the order of past disk accesses, which is used to accurately predict future access sequences. The range of parameters of this scheme is explored, and its performance is evaluated through trace-driven simulation, using traces obtained from three different UNIX minicomputers. Unlike disk trace data previously described in the literature, the traces used include time stamps for each reference. With this timing information (essential for evaluating any prefetching scheme), it is shown that a cache with the adaptive prefetching mechanism can reduce the average time to service a disk request by a factor of up to three, relative to an identical disk cache without prefetching.
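A minimal form of "remember the order of past accesses" is a last-successor table. This is a sketch of the idea only; the paper's predictor tracks longer access sequences and issues multiple prefetches.

```python
class AdaptivePrefetcher:
    """For each sector, remember what followed it last time and
    prefetch that on the next visit (last-successor prediction)."""
    def __init__(self):
        self.successor = {}   # sector -> sector that followed it
        self.last = None

    def access(self, sector):
        prediction = self.successor.get(sector)   # what to prefetch now
        if self.last is not None:
            self.successor[self.last] = sector    # learn the new order
        self.last = sector
        return prediction
```

Because the table is updated on every access, the predictor adapts as access sequences change, rather than assuming purely sequential layout.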


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号