Similar Documents (20 results)
1.
Caching has been intensively used in memory and traditional file systems to improve system performance. However, the use of caching in parallel file systems and I/O libraries has been limited to I/O nodes to avoid cache coherence problems. We specify an adaptive cache coherence protocol that is very suitable for parallel file systems and parallel I/O libraries. This model exploits the use of caching, both at processing and I/O nodes, providing performance improvement mechanisms such as aggressive prefetching and delayed-write techniques. The cache coherence problem is solved by using a dynamic scheme of cache coherence protocols with different sizes and shapes of granularity. The proposed model is very appropriate for parallel I/O interfaces, such as MPI-IO. Performance results, obtained on an IBM SP2, are presented to demonstrate the advantages offered by the cache management methods proposed.

2.
A query result cache can store either the sets of document identifiers for query results or the actual result pages returned, in order to speed up responses to user queries; the two forms are called identifier caching and page caching, respectively. For a fixed amount of memory, an identifier cache achieves a higher hit ratio, while a page cache delivers faster responses. Exploiting the temporal and spatial locality of user query accesses, this paper proposes a novel hierarchical result-caching mechanism. First, the mechanism splits a fixed-size result cache into two layers: a page cache and an identifier cache. A user query is answered from the first-layer page cache if possible; on a miss, the second-layer identifier cache is tried. Experiments show that this hierarchical mechanism achieves a considerable improvement in average query response time over traditional mechanisms that rely on a single cache form: relative to a pure page cache, for example, 9% on average and up to 11% in the best case. Second, on top of the identifier cache, the mechanism adds a heuristic prefetching strategy that mines the spatial locality of user queries. Experiments show that integrating this prefetching further improves retrieval performance, yielding a result-caching mechanism that is effective in both the temporal and the spatial dimension.
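A minimal sketch of the lookup path such a two-layer result cache implies, assuming LRU replacement in each layer (the abstract does not fix a policy); the class and the `render`/`retrieve` callbacks are illustrative assumptions, not the paper's implementation:

```python
from collections import OrderedDict

class TwoLayerResultCache:
    """Layer 1 caches rendered result pages; layer 2 caches docID lists."""

    def __init__(self, page_slots, docid_slots):
        self.pages = OrderedDict()   # query -> rendered result page
        self.docids = OrderedDict()  # query -> list of document identifiers
        self.page_slots = page_slots
        self.docid_slots = docid_slots

    def lookup(self, query, render, retrieve):
        # Layer 1: a page hit answers the query immediately.
        if query in self.pages:
            self.pages.move_to_end(query)
            return self.pages[query]
        # Layer 2: a docID hit skips retrieval but still renders the page.
        if query in self.docids:
            self.docids.move_to_end(query)
            ids = self.docids[query]
        else:
            ids = retrieve(query)        # double miss: full retrieval
            self.docids[query] = ids
            if len(self.docids) > self.docid_slots:
                self.docids.popitem(last=False)   # LRU eviction
        page = render(ids)
        self.pages[query] = page
        if len(self.pages) > self.page_slots:
            self.pages.popitem(last=False)
        return page
```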

3.
The mobile computing environment has recently been receiving increasing attention. We consider a mobile environment in which a collection of mobile clients accesses a stationary database server via a wireless channel. Due to the limited bandwidth of a wireless channel and the instability of the wireless network, caching frequently accessed data items in a client's local storage becomes especially important for improving the performance and data availability of data access queries. In this paper, we discuss the limitations of existing caching mechanisms in a mobile environment and investigate the issues that need to be addressed. We propose an adaptive caching model that can cope with the nature of a mobile environment and low-bandwidth wireless media, supporting fast data access. We describe the adaptive cache replacement and refresh mechanisms; explain the implementation in the context of object-oriented databases; and illustrate the results of some exploratory experiments to demonstrate the feasibility of the mechanisms.

4.
Document caching and connection caching are extensively studied problems. In document caching, one has to maintain caches containing documents accessible in a network. In connection caching, one has to maintain a set of open network connections that handle data transfer. Previous work investigated these two problems separately while in practice the problems occur together: In order to load a document, one has to establish a connection between network nodes if the required connection is not already open. In this paper we present the first study that integrates document and connection caching. We first consider a very basic model in which all documents have the same size and the cost of loading a document or establishing a connection is equal to 1. We present deterministic and randomized online algorithms that achieve nearly optimal competitive ratios unless the size of the connection cache is extremely small. We then consider general settings where documents have varying sizes. We investigate a FAULT model in which the loading cost of a document is 1 as well as a BIT model in which the loading cost is equal to the size of the document.
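A sketch of one access in the basic model (unit-size documents, unit cost per document load or connection setup), assuming plain LRU in both caches; the paper's online algorithms are more refined, so the function and its cost accounting are illustrative assumptions only:

```python
from collections import OrderedDict

def serve(doc_cache, conn_cache, doc, server, doc_slots, conn_slots):
    """Return the cost (0, 1, or 2) of serving `doc` hosted at `server`."""
    if doc in doc_cache:
        doc_cache.move_to_end(doc)
        return 0                         # document hit: no connection needed
    cost = 1                             # document fault: load the document
    if server in conn_cache:
        conn_cache.move_to_end(server)
    else:
        cost += 1                        # connection fault: open a connection
        conn_cache[server] = True
        if len(conn_cache) > conn_slots:
            conn_cache.popitem(last=False)   # close the LRU idle connection
    doc_cache[doc] = True
    if len(doc_cache) > doc_slots:
        doc_cache.popitem(last=False)
    return cost

# e.g.: docs, conns = OrderedDict(), OrderedDict()
#       total = sum(serve(docs, conns, d, host_of(d), 100, 10) for d in trace)
```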

5.
The sharing of caches among proxies is an important technique to reduce Web traffic, alleviate network bottlenecks, and improve response time of document requests. Most existing work on cooperative caching has been focused on serving misses collaboratively. Very few have studied the effect of cooperation on document placement schemes and its potential enhancements on cache hit ratio and latency reduction. We propose a new document placement scheme which takes into account the contentions at individual caches in order to limit the replication of documents within a cache group and increase document hit ratio. The main idea of this new scheme is to view the aggregate disk space of the cache group as a global resource of the group and uses the concept of cache expiration age to measure the contention of individual caches. The decision of whether to cache a document at a proxy is made collectively among the caches that already have a copy of this document. We refer to this new document placement scheme as the Expiration Age-based scheme (EA scheme). The EA scheme effectively reduces the replication of documents across the cache group, while ensuring that a copy of the document always resides in a cache where it is likely to stay for the longest time. We report our study on the potentials and limits of the EA scheme using both analytic modeling and trace-based simulation. The analytical model compares and contrasts the existing (ad hoc) placement scheme of cooperative proxy caches with our new EA scheme and indicates that the EA scheme improves the effectiveness of aggregate disk usage, thereby increasing the average time duration for which documents stay in the cache. The trace-based simulations show that the EA scheme yields higher hit rates and better response times compared to the existing document placement schemes used in most of the caching proxies.
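A sketch of the placement decision as the abstract describes it: the caches already holding a copy decide collectively, and a new replica is added only where it is likely to survive the longest. The exact comparison rule is an assumption drawn from this description:

```python
def should_cache(local_expiration_age, holder_expiration_ages):
    """Decide whether this proxy should store a copy of the document.

    local_expiration_age: expiration age (contention measure) of this cache;
    holder_expiration_ages: ages at the group caches already holding a copy.
    """
    if not holder_expiration_ages:
        return True   # no copy in the group yet: always cache it
    # Limit replication: add a copy only where it will likely stay longer
    # than at every cache that already holds one.
    return local_expiration_age > max(holder_expiration_ages)
```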

6.
In a mobile computing environment, database servers disseminate information to multiple mobile clients via wireless channels. Due to the low bandwidth and low reliability of wireless channels, it is important for a mobile client to cache its frequently accessed database items into its local storage. This improves performance of database queries and improves availability of database items for query processing during disconnection. In this paper, we investigate issues on caching granularity, coherence strategy, and replacement policy of caching mechanisms for a mobile environment utilizing a point-to-point communication paradigm. We first illustrate that page-based caching is not suitable in the mobile context due to the lack of locality among database items. We propose three different levels of caching granularity: attribute caching, object caching, and hybrid caching, a hybrid approach of attribute and object caching. Next, we show that existing coherence strategies are inappropriate due to frequent disconnection in a mobile environment, and propose a cache coherence strategy based on the update patterns of database items. Via a detailed simulation model, we examine the performance of the various levels of caching granularity with our cache coherence strategy. We observe, in general, that hybrid caching could achieve better performance. Finally, we propose several cache replacement policies that can adapt to the access patterns of database items. For each given caching granularity, we discover that our replacement policies outperform conventional ones in most situations.
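One way to read "a coherence strategy based on update patterns" is to derive each item's revalidation window from its observed update gaps; the following sketch is such a reading, and the rule and defaults are assumptions rather than the paper's strategy:

```python
def refresh_interval(update_times, safety=0.5, default=60.0):
    """Derive a per-item cache validity window (seconds) from its observed
    update history: revalidate well before the next expected update."""
    if len(update_times) < 2:
        return default   # history too thin to estimate an update pattern
    gaps = [b - a for a, b in zip(update_times, update_times[1:])]
    return safety * (sum(gaps) / len(gaps))   # fraction of the mean gap
```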

7.
Much research has focused on caching adaptive videos to improve system performance for heterogeneous clients with diverse access bandwidths. However, existing rate-adaptive caching systems, which are based on layered coding or transcoding, often suffer from a coarse adaptation and/or a high computation overhead. In this paper, we propose an innovative rate-adaptive caching framework that enables low-cost and fine-grained adaptation by using MPEG-4 fine-grained scalable videos. The proposed framework is both network-aware and media-adaptive; i.e., the clients can be of heterogeneous streaming rates, and the backbone bandwidth consumption can be adaptively controlled. We develop efficient cache management schemes to determine the best contents to cache and the optimal streaming rate to each client under the framework. We demonstrate via simulations that, compared to nonadaptive caching, the proposed framework with the optimal cache management not only achieves a significant reduction in the data transmission cost, but also enables a flexible utility assignment for the heterogeneous clients. Our results also show that the framework maintains a low computational overhead, which implies that it is practically deployable.

8.
In information-centric networking, in-network caching has the potential to improve network efficiency and content distribution performance by satisfying user requests with cached content rather than downloading the requested content from remote sources. In this respect, users who request, download, and keep the content may be able to contribute to in-network caching by sharing their downloaded content with other users in the same network domain (i.e., user-assisted in-network caching). In this paper, we examine various aspects of user-assisted in-network caching in the hopes of efficiently utilizing user resources to achieve in-network caching. Through simulations, we first show that user-assisted in-network caching has attractive features, such as self-scalable caching, a near-optimal cache hit ratio (that can be achieved when the content is fully cached by the in-network caching) based on stable caching, and performance improvements over in-network caching. We then examine the caching strategy of user-assisted in-network caching. We examine three caching strategies based on a centralized server that maintains all content availability information and informs each user of what to cache. We also examine three caching strategies based on each user's content availability information. We show that the caching strategy affects the distribution of upload overhead across users and the number of cache hits in each segment. One interesting observation is that, even with a small storage space (i.e., 0.1% of the content size per user), the centralized and distributed approaches improve the cache hit ratio by 50% and 45%, respectively. With an overall view of caching information, the centralized approach can achieve a higher cache hit ratio than the distributed approach. Based on this observation, we discuss a distributed approach with a larger view of caching information than the basic distributed approach and, through simulations, confirm that a larger view leads to a higher cache hit ratio. Another interesting observation is that the random distributed strategy yields comparable performance to more complex strategies.

9.
Because Internet access rates are highly heterogeneous, many video content providers today make available different versions of the videos, with each version encoded at a different rate. Multiple video versions, however, require more server storage and may also dramatically impact cache performance in a traditional cache or in a CDN server. An alternative to versions is layered encoding, which can also provide multiple quality levels. Layered encoding requires less server storage capacity and may be more suitable for caching; but it typically increases transmission bandwidth due to encoding overhead. In this paper we compare video streaming of multiple versions with that of multiple layers in a caching environment. We examine caching and distribution strategies that use both versions and layers. We consider two cases: one in which the request distribution for the videos is known a priori, and adaptive caching, for which the request distribution is unknown. Our analytical and simulation results indicate that mixed distribution/caching strategies provide the best overall performance. A shorter version of this work has appeared in Proc. of IEEE International Conference on Multimedia and Expo (ICME), Vol. 2, pages 45–48, Lausanne, Switzerland, August 2002.

10.
This paper proposes a new Web caching architecture model: content-based Web caching. The model jointly considers the proxy's operational information and the content characteristics of Web documents, defines virtual user communities and proxy personalities, and uses Ontology techniques to characterize a proxy's personality. Simulation experiments show that incorporating content attributes further improves Web caching performance.

11.
Adaptive database query caching
Traditional caches are managed in a rather rigid way and cannot adjust their own parameters in response to dynamic runtime information from the database to achieve better performance. Database semantic caching lets the database "understand" query semantics and can supply information for dynamic tuning. Query caching is one form of semantic caching, sitting between SQL parsing and query execution; this paper studies autonomous management of the query cache to improve database query performance. It first introduces semantic caching and autonomic computing as commonly used in databases, then gives a formal definition of the query cache and proposes an adaptive query cache model. Finally, experiments on the MySQL query cache show good results.
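A minimal sketch of a query cache sitting between SQL parsing and execution, with a crude self-tuning rule (grow capacity when the hit rate is high, shrink when low); the normalization, thresholds, and invalidation below are illustrative assumptions, not the paper's model:

```python
from collections import OrderedDict

class AdaptiveQueryCache:
    def __init__(self, capacity=128, lo=0.2, hi=0.6):
        self.cache = OrderedDict()       # normalized SQL text -> result set
        self.capacity, self.lo, self.hi = capacity, lo, hi
        self.hits = self.lookups = 0

    def get(self, sql, execute):
        key = " ".join(sql.lower().split())    # naive SQL normalization
        self.lookups += 1
        if key in self.cache:
            self.hits += 1
            self.cache.move_to_end(key)        # LRU bookkeeping
            return self.cache[key]
        result = execute(sql)                  # miss: run the query
        self.cache[key] = result
        while len(self.cache) > self.capacity:
            self.cache.popitem(last=False)
        if self.lookups >= 1000:               # periodic adaptation step
            rate = self.hits / self.lookups
            if rate > self.hi:
                self.capacity = int(self.capacity * 1.25)
            elif rate < self.lo:
                self.capacity = max(16, int(self.capacity * 0.8))
            self.hits = self.lookups = 0
        return result

    def invalidate(self, table):
        # Coarse consistency: an update to `table` (lowercase name) flushes
        # every cached query whose text mentions it.
        self.cache = OrderedDict(
            (k, v) for k, v in self.cache.items() if table not in k)
```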

12.
Existing Web cache implementations are mostly based on traditional memory-caching algorithms; because Web requests are heterogeneous, traditional replacement algorithms do not work effectively in the Web environment. This paper studies the basis for Web cache replacement and analyzes the shortcomings of previous replacement algorithms. Considering the influence on cache replacement of a Web document's size, access cost, access frequency, degree of user interest, and time of last access, it introduces the concept of a Web cache object's role and builds a new high-precision, role-based Web cache replacement algorithm (the ORB algorithm). Using NASA and DEC proxy server traces, the algorithm is compared in simulation with the LRU, LFU, SIZE, and Hybrid algorithms; the results demonstrate the advantage of the ORB algorithm.
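The abstract lists the five factors but not the ORB formula, so the following is only a hedged sketch of a replacement score combining them; the weighted form and field names are assumptions:

```python
import time

def replacement_value(doc, now=None, w_freq=1.0, w_cost=1.0,
                      w_interest=1.0, w_recency=1.0):
    """Higher value = more worth keeping; evict the minimum-value document."""
    now = time.time() if now is None else now
    age = max(now - doc["last_access"], 1e-6)   # seconds since last access
    return (w_freq * doc["frequency"]
            + w_cost * doc["fetch_cost"]
            + w_interest * doc["interest"]
            + w_recency / age) / doc["size"]    # larger documents score lower

# e.g.: victim = min(cache.values(), key=replacement_value)
```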

13.
Proxy cache algorithms: design, implementation, and performance
Caching at proxy servers is one of the ways to reduce the response time perceived by World Wide Web users. Cache replacement algorithms play a central role in the response time reduction by selecting a subset of documents for caching, so that a given performance metric is maximized. At the same time, the cache must take extra steps to guarantee some form of consistency of the cached documents. Cache consistency algorithms enforce appropriate guarantees about the staleness of the cached documents. We describe a unified cache maintenance algorithm, LNC-R-W3-U, which integrates both cache replacement and consistency algorithms. The LNC-R-W3-U algorithm evicts documents from the cache based on the delay to fetch each document into the cache. Consequently, the documents that took a long time to fetch are preferentially kept in the cache. The LNC-R-W3-U algorithm also factors into its eviction decisions the validation rate of each document, as provided by the cache consistency component of LNC-R-W3-U. Consequently, documents that are infrequently updated and thus seldom require validations are preferentially retained in the cache. We describe the implementation of LNC-R-W3-U and its integration with the Apache 1.2.6 code base. Finally, we present a trace-driven experimental study of LNC-R-W3-U performance and its comparison with other previously published algorithms for cache maintenance.
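A hedged sketch of the eviction priority this description implies: keep documents that are referenced often, are expensive to fetch, and rarely need validation. The exact LNC-R-W3-U profit formula is not reproduced here; this reading of it is an assumption:

```python
def keep_priority(reference_rate, fetch_delay, validation_rate, size):
    """Higher value = more worth keeping in the cache."""
    # Frequently referenced, slow-to-fetch documents rank high; documents
    # that are large or often validated (frequently updated) rank low.
    return (reference_rate * fetch_delay) / (size * (1.0 + validation_rate))

# Evict documents in increasing keep_priority order until space is freed.
```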

14.
Web caching proxy servers are essential for improving web performance and scalability, and recent research has focused on making proxy caching work for database-backed web sites. In this paper, we explore a new proxy caching framework that exploits the query semantics of HTML forms. We identify two common classes of form-based queries from real-world database-backed web sites, namely, keyword-based queries and function-embedded queries. Using typical examples of these queries, we study two representative caching schemes within our framework: (i) traditional passive query caching, and (ii) active query caching, in which the proxy cache can service a request by evaluating a query over the contents of the cache. Results from our experimental implementation show that our form-based proxy is a general and flexible approach that efficiently enables active caching schemes for database-backed web sites. Furthermore, handling query containment at the proxy yields significant performance advantages over passive query caching, but extending the power of the active cache to do full semantic caching appears to be less generally effective.
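A sketch of active query caching for conjunctive keyword queries: a cached query whose keyword set is a subset of the new query's contains the new query, so the proxy can answer by filtering the cached results instead of contacting the server. The record layout is an illustrative assumption:

```python
def answer_from_cache(cache, keywords):
    """cache: frozenset of keywords -> list of result records (dicts with
    a 'keywords' field). Returns results, or None on a containment miss."""
    want = frozenset(keywords)
    for cached_query, results in cache.items():
        if cached_query <= want:   # cached query contains the new one
            # Refine the broader cached answer down to the new query.
            return [r for r in results if want <= frozenset(r["keywords"])]
    return None  # containment miss: forward the query to the server
```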

15.
Performance analysis of proxy Web caches
Using Web cache technology to improve Internet performance has become a mainstream research area; the principle works much like the multi-level caches in processors and file systems. Large-scale Web caching systems have become an important part of the Internet infrastructure in many countries. Starting from the trace logs of three proxy Web caches of different access scales, this paper analyzes statistical characteristics such as user access patterns, cache hit ratios, and cache server processing delays, and proposes a two-level cooperative Web cache cluster technique that combines distributed shared RAM with external storage and can provide scalable, high-performance parallel Web caching services.

16.
Computing global illumination in complex scenes is, even with today's computational power, a demanding task. In this work we propose a novel irradiance caching scheme that combines the advantages of two state-of-the-art algorithms for high-quality global illumination rendering: lightcuts, an adaptive and hierarchical instant-radiosity based algorithm, and the widely used (ir)radiance caching algorithm for sparse sampling and interpolation of (ir)radiance in object space. Our adaptive radiance caching algorithm is based on anisotropic cache splatting, which adapts the cache footprints not only to the magnitude of the illumination gradient computed with lightcuts but also to its orientation, allowing larger interpolation errors along the direction of coherent illumination while reducing the error along the illumination gradient. Since lightcuts computes the direct and indirect lighting seamlessly, we use a two-layer radiance cache to store and control the interpolation of direct and indirect lighting individually with different error criteria. In multiple iterations our method detects cache interpolation errors above the visibility threshold of a pixel and reduces the anisotropic cache footprints accordingly. We achieve significantly better image quality while also reducing computation costs by one to two orders of magnitude with respect to the well-known photon mapping with (ir)radiance caching procedure.

17.
Modern Internet routers require powerful forwarding facilities to cope with extremely high-rate Forwarding Information Base (FIB) lookups. In general, the FIB is constrained to a small, highly efficient but expensive memory. Unfortunately, the BGP route table (RIB) keeps increasing, and this subsequently results in severe FIB inflation at BGP routers. What if we only load a small portion of the RIB into the FIB? Recently the route caching mechanism has been revisited. With such a route caching mechanism, the optimal method is to load the FIB with popular prefixes which contribute the major traffic loads. We propose a prediction-based method to catch those popular prefixes with a limited cache size. In this paper, the dynamics of popular prefixes is studied based on real traffic traces from different ISPs. Applying a GM(1,1) model, which is widely used in grey-system control and prediction, we propose a traffic-prediction-based route caching method that biases the cache dump strategy with a range of history to ameliorate the effects of bursts from non-popular prefixes. We also suggest applying FIB aggregation techniques, e.g., the Optimal Routing Table Constructor (ORTC) algorithm, to suppress the number of non-popular sub-prefixes of the popular prefixes on route updates. The evaluation of our method is based on simulation over real traffic traces. The simulation shows our prediction-based cache replacement strategy outperforms other cache strategies and matches Internet traffic dynamics very well.
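A self-contained GM(1,1) sketch, forecasting a prefix's next-interval traffic from its recent positive history; prefixes ranked by forecast would then fill the FIB cache. This follows the standard GM(1,1) construction, with no claim that it matches the paper's exact parameterization:

```python
import math

def gm11_forecast(x0):
    """Predict the next value of the positive series x0 with GM(1,1)."""
    n = len(x0)
    x1 = [x0[0]]
    for v in x0[1:]:
        x1.append(x1[-1] + v)                 # accumulated (AGO) series
    # Background values and least-squares estimate of a (develop
    # coefficient) and b (grey input) in dx1/dt + a*x1 = b.
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]
    y = x0[1:]
    m = n - 1
    szz = sum(zi * zi for zi in z)
    sz, sy = sum(z), sum(y)
    szy = sum(zi * yi for zi, yi in zip(z, y))
    det = m * szz - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det
    if abs(a) < 1e-12:
        return x0[-1]                         # constant series: predict last
    # Time-response function, then difference back to the original series.
    x1_next = (x0[0] - b / a) * math.exp(-a * n) + b / a
    x1_curr = (x0[0] - b / a) * math.exp(-a * (n - 1)) + b / a
    return x1_next - x1_curr

# e.g. rank prefixes by gm11_forecast(history) and cache the top-k in the FIB.
```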

18.
Research on cache consistency in Web site cache design
Starting from the implementation mechanisms of Web site caching, this paper analyzes the impact of cache consistency on caching efficiency, and focuses on a cache design, implemented on a server cluster, that treats an entire Web site as the caching unit: lookup and replacement operate on whole sites rather than individual files, which better preserves the cache consistency principle in the cache design.

19.
A distributed cache queue model for cluster services on the J2EE platform
周敬利  李福寿  余胜生 《计算机工程》2005,31(4):100-101,191
Cluster caching is a key technique for improving the scalability of J2EE (Java 2 Enterprise Edition) applications, but the clustering technology J2EE currently provides still handles cluster cache performance inadequately. Based on an analysis of applications of J2EE cluster caching technology, this paper proposes a distributed cache queue model that effectively addresses reliability, scalability, and failover within a cluster. Experiments show that the distributed cache queue model improves cluster access efficiency and is a feasible solution.

20.
Research on mobile query cache processing
Client caching provides an effective way to improve the overall performance of client/server database systems and the availability of data on the client side; the scarcity of network resources in mobile environments makes it even more important. Semantic caching is a class of caching built on the semantic relationships among client queries. This paper proposes a semantic-cache-based client caching mechanism, describes how the cache contents are organized, and proposes a strategy for merging cache items; it then discusses query processing strategies over the semantic cache. Finally, simulation results show that the mechanism improves the performance of client/server database systems in distributed and especially mobile environments.
