Similar Articles (20 results)
1.
We argue that cache consistency mechanisms designed for stand-alone proxies do not scale to the large number of proxies in a content distribution network and are not flexible enough to allow consistency guarantees to be tailored to object needs. To meet the twin challenges of scalability and flexibility, we introduce the notion of cooperative consistency along with a mechanism, called cooperative leases, to achieve it. By supporting Δ-consistency semantics and by using a single lease for multiple proxies, cooperative leases allow the notion of leases to be applied in a flexible, scalable manner to CDNs. Further, the approach employs application-level multicast to propagate server notifications to proxies in a scalable manner. We implement our approach in the Apache Web server and the Squid proxy cache and demonstrate its efficacy using a detailed experimental evaluation. Our results show a factor of 2.5 reduction in server message overhead and a 20 percent reduction in server state space overhead when compared to original leases, albeit at an increased interproxy communication overhead.

2.
Volume leases for consistency in large-scale systems   (total citations: 2; self-citations: 0; cited by others: 2)
This article introduces volume leases as a mechanism for providing server-driven cache consistency for large-scale, geographically distributed networks. Volume leases retain the good performance, fault tolerance, and server scalability of the semantically weaker client-driven protocols that are now used on the Web. Volume leases are a variation of object leases, which were originally designed for distributed file systems. However, whereas traditional object leases amortize overheads over long lease periods, volume leases exploit spatial locality to amortize overheads across multiple objects in a volume. This approach allows systems to maintain good write performance even in the presence of failures. Using trace-driven simulation, we compare three volume lease algorithms against four existing cache consistency algorithms and show that our new algorithms provide strong consistency while maintaining scalability and fault-tolerance. For a trace-based workload of Web accesses, we find that volumes can reduce message traffic at servers by 40 percent compared to a standard lease algorithm, and that volumes can considerably reduce the peak load at servers when popular objects are modified.
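To make the mechanism concrete, the sketch below shows how a proxy cache might combine a long per-object lease with a short per-volume lease: an object is served locally only while both leases hold, and a single volume renewal re-covers every object in that volume. This is an illustrative reading of the idea, not the authors' implementation; the class, the lease durations, and the `renew_volume` callback are invented for the example.

```python
import time

class Lease:
    def __init__(self, duration):
        self.expires = time.time() + duration

    def valid(self):
        return time.time() < self.expires

class VolumeLeaseCache:
    """Toy proxy cache: an object may be served locally only while BOTH its
    own (long) object lease and its volume's (short) volume lease hold."""

    OBJECT_LEASE = 600   # illustrative long per-object lease (seconds)
    VOLUME_LEASE = 30    # illustrative short per-volume lease (seconds)

    def __init__(self):
        self.objects = {}    # url -> (data, object Lease, volume id)
        self.volumes = {}    # volume id -> volume Lease

    def insert(self, url, volume, data):
        self.objects[url] = (data, Lease(self.OBJECT_LEASE), volume)
        self.volumes.setdefault(volume, Lease(self.VOLUME_LEASE))

    def read(self, url, renew_volume):
        entry = self.objects.get(url)
        if entry is None:
            return None                      # miss: fetch from the server
        data, obj_lease, volume = entry
        if not obj_lease.valid():
            return None                      # object lease expired: revalidate this object
        vol_lease = self.volumes.get(volume)
        if vol_lease is None or not vol_lease.valid():
            # One short renewal message re-covers every object in the volume;
            # the reply is assumed to carry any pending invalidations.
            self.volumes[volume] = renew_volume(volume)
        return data
```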

3.
There are many methods to maintain consistency in a distributed computing environment. Ideally, efficient schemes for maintaining consistency should take into account the following factors: the lease duration of replicated data, the data access pattern, and system parameters. One method used to provide strong consistency in the Web environment is the lease method. During a proxy's lease period from the Web server, the server can notify the proxy of modifications by invalidation or by update. In this paper, we analyze lease protocol performance while varying the update/invalidation scheme, the lease duration, and the read rate. Using these analyses, we can choose an adaptive lease duration and the proper protocol (an invalidation or update scheme for propagating modifications) for each proxy in the Web environment. As the number of proxies used for Web caching increases exponentially, a more efficient method for maintaining consistency needs to be designed. We also present a three-tier hierarchy in which each group and node independently and adaptively chooses the proper lease duration and protocol for each proxy cache. These choices make proxy caching adaptive to client access patterns and system parameters.
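The invalidation-versus-update choice can be illustrated with a crude cost model that compares expected bytes sent per unit time. The sketch below is a simplification under assumptions not stated in the paper (it ignores lease renewal traffic and staleness costs); the function and parameter names are hypothetical.

```python
def choose_protocol(read_rate, write_rate, object_size_kb, ctrl_msg_kb=0.05):
    """Pick 'update' vs. 'invalidation' from a crude bytes-per-second model.

    read_rate / write_rate are events per second at one proxy.
    Update: every server-side write pushes the whole object to the proxy.
    Invalidation: every write sends a small control message, and the first
    read after a write re-fetches the whole object.
    """
    update_cost = write_rate * object_size_kb
    reads_that_miss = min(read_rate, write_rate)   # at most one re-fetch per write
    invalidation_cost = write_rate * ctrl_msg_kb + reads_that_miss * object_size_kb
    return "update" if update_cost < invalidation_cost else "invalidation"

# Example: a page read far more often than it changes favors update;
# a rarely read page favors invalidation.
print(choose_protocol(read_rate=5.0, write_rate=0.1, object_size_kb=20))    # update
print(choose_protocol(read_rate=0.01, write_rate=0.1, object_size_kb=20))   # invalidation
```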

4.
Evaluation of Strong Consistency Web Caching Techniques   (total citations: 1; self-citations: 0; cited by others: 1)
Cao, L. Y., Özsu, M. T. World Wide Web, 2002, 5(2): 95–123
The growth of the World Wide Web (WWW or Web) and its increasing use in all types of business have created bottlenecks that lead to high network and server overload and, eventually, high client latency. Web caching has become an important topic of research, in the hope that these problems can be addressed by appropriate caching techniques. Conventional wisdom holds that strong cache consistency, with (almost) transactional consistency guarantees, may neither be necessary for Web applications, nor suitable due to its high overhead. However, as business transactions on the Web become more popular, strong consistency will be increasingly necessary. Consequently, it is important to have a comprehensive understanding of the performance behavior of these protocols. The existing studies, unfortunately, are ad hoc and the results cannot be compared across different studies. In this paper we evaluate the performance of different categories of cache consistency algorithms using a standard benchmark: TPC-W, the Web commerce benchmark. Our experiments show that strong cache consistency can still be enforced without much overhead, and that Invalidation, an event-driven strong cache consistency algorithm, is most suitable for online e-business. We also evaluate the optimum deployment of caches and find that a proxy-side cache has a 30–35% performance advantage over a client-side cache with regard to system throughput.

5.
High-performance Web sites rely on Web server "farms", hundreds of computers serving the same content, for scalability, reliability, and low-latency access to Internet content. Deploying these scalable farms typically requires the power of distributed or clustered file systems. Building Web server farms on file systems complements hierarchical proxy caching. Proxy caching replicates Web content throughout the Internet, thereby reducing latency from network delays and off-loading traffic from the primary servers. Web server farms scale resources at a single site, reducing latency from queuing delays. Both technologies are essential when building a high-performance infrastructure for content delivery. The authors present a cache consistency model and locking protocol customized for file systems that are used as scalable infrastructure for Web server farms. The protocol takes advantage of the Web's relaxed consistency semantics to reduce latencies and network overhead. Our hybrid approach preserves strong consistency for concurrent write sharing with time-based consistency and push caching for readers (Web servers). Using simulation, we compare our approach to the Andrew file system and the sequential consistency file system protocols we propose to replace.

6.
Effective caching in the domain name system (DNS) is critical to its performance and scalability. The existing DNS only supports weak cache consistency by using the time-to-live (TTL) mechanism, which functions reasonably well in normal situations. However, maintaining strong cache consistency in DNS as an indispensable exception-handling mechanism has become more and more demanding for three important objectives: 1) to quickly respond to and handle exceptions such as sudden and dramatic Internet failures caused by natural and human disasters, 2) to adapt to increasingly frequent changes of Internet Protocol (IP) addresses due to the introduction of dynamic DNS techniques for various stationed and mobile devices on the Internet, and 3) to provide fine-grained controls for content delivery services to balance server load distributions in a timely manner. With agile adaptation to various exceptional Internet dynamics, strong DNS cache consistency improves the availability and reliability of Internet services. In this paper, we first conduct extensive Internet measurements to quantitatively characterize DNS dynamics. Then, we propose a proactive DNS cache update protocol (DNScup), running as middleware in DNS name servers, to provide strong cache consistency for DNS. The core of DNScup is an optimal lease scheme, called dynamic lease, which keeps track of the local DNS name servers. We compare dynamic lease with other existing lease schemes through theoretical analysis and trace-driven simulations. Based on the DNS dynamic update protocol, we build a DNScup prototype with minor modifications to the current DNS implementation. Our system prototype demonstrates the effectiveness of DNScup and its ease of incremental deployment on the Internet.

7.
A new Web cache sharing scheme is presented. Our scheme reduces the duplicated copies of the same objects in global shared Web caches. It also reduces the message overhead of existing schemes significantly. Trace-driven simulations with actual Web cache logs show that the proposed scheme performs better than the two well-known Web cache sharing schemes, the Internet Cache Protocol and the Cache Array Routing Protocol.

8.
In this paper, we address the problem of cache replacement for transcoding proxy caching. A transcoding proxy is a proxy that can transcode a multimedia object into an appropriate format or resolution for each client. We first propose an effective cache replacement algorithm for transcoding proxies. In general, when a new object is to be cached, cache replacement algorithms evict some of the cached objects with the least profit to accommodate the new object. Our algorithm takes into account the inter-relationships among different versions of the same multimedia object, and selects the versions to replace according to their aggregate profit, which usually differs from the simple summation of their individual profits assumed in existing algorithms. It also considers cache consistency, which existing algorithms do not. We then present a complexity analysis to show the efficiency of our algorithm. Finally, we give extensive simulation results comparing the performance of our algorithm with some existing algorithms. The results show that our algorithm outperforms the others in terms of various performance metrics.
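The following sketch illustrates why the profit of a set of versions is not the sum of individual profits: a cached higher-fidelity version can serve a request for a lower-fidelity one via transcoding, so each version's marginal value depends on which other versions stay cached. It is a hypothetical model of the idea, not the paper's algorithm; all names and cost structures are assumptions.

```python
def aggregate_profit(cached_versions, access_rate, fetch_cost, transcode_cost):
    """Aggregate profit of keeping a set of versions of ONE multimedia object.

    A request for version v is served by the cheapest available source: an
    exact cached copy (cost 0), a cached higher-fidelity version that can be
    transcoded down, or the origin server (full fetch cost). Because cached
    versions can substitute for one another, the aggregate profit of a set
    generally differs from the sum of the versions' individual profits.
    """
    profit = 0.0
    for v, rate in access_rate.items():        # v: version id, higher = finer fidelity
        base = fetch_cost[v]                   # cost of serving v with nothing cached
        best = 0.0 if v in cached_versions else base
        for higher in cached_versions:
            if higher > v:                     # transcode a finer cached copy down to v
                best = min(best, transcode_cost[(higher, v)])
        profit += rate * (base - best)
    return profit

# Hypothetical example with two versions (2 = full resolution, 1 = thumbnail):
rates = {1: 3.0, 2: 0.5}
fetch = {1: 4.0, 2: 10.0}
transcode = {(2, 1): 1.0}
both = aggregate_profit({1, 2}, rates, fetch, transcode)        # 17.0
naive = aggregate_profit({1}, rates, fetch, transcode) + \
        aggregate_profit({2}, rates, fetch, transcode)          # 12.0 + 14.0 = 26.0
print(both, naive)   # aggregate profit of the pair != sum of separately computed profits
```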

9.
Computer Networks, 1999, 31(11–16): 1725–1736
The World-Wide Web provides remote access to pages using its own naming scheme (URLs), transfer protocol (HTTP), and cache algorithms. Not only do these special-purpose mechanisms have performance implications, but they also make it impossible for standard Unix applications to access the Web. Gecko is a system that provides access to the Web via the NFS protocol. URLs are mapped to Unix file names, providing unmodified applications access to Web pages; pages are transferred from the Gecko server to the clients using NFS instead of HTTP, significantly improving performance; and NFS's cache consistency mechanism ensures that all clients have the same version of a page. Applications access pages as they would Unix files. A client-side proxy translates HTTP requests into file accesses, allowing existing Web applications to use Gecko. Experiments performed on our prototype show that Gecko is able to provide this additional functionality at a performance level that exceeds that of HTTP.

10.
In the literature, there exist two types of cache consistency maintenance algorithms for mobile computing environments: stateless and stateful. In a stateless approach, the server is unaware of the cache contents at a mobile user (MU). Even though stateless approaches employ simple database management schemes, they lack scalability and the ability to support user disconnectedness and mobility. On the other hand, a stateful approach is scalable for large database systems at the cost of nontrivial overhead for server database management. We propose a novel algorithm, called the Scalable Asynchronous Cache Consistency Scheme (SACCS), which inherits the positive features of both stateless and stateful approaches. SACCS provides weak cache consistency for unreliable communication (e.g., wireless mobile) environments with a small stale cache hit probability. It is also a highly scalable algorithm with minimal database management overhead. These properties are accomplished through the use of flag bits at the server cache (SC) and MU cache (MUC), an identifier (ID) kept in the MUC for each entry after its invalidation, an estimated time-to-live (TTL) for each cached entry, and the rendering of all valid MUC entries to an uncertain state when an MU wakes up. The stale cache hit probability is analyzed and also simulated under a Rayleigh fading model of error-prone wireless channels. Comprehensive simulation results show that the performance of SACCS is superior to that of other existing stateful and stateless algorithms in both single-cell and multicell mobile environments.
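A toy rendering of the mobile-user cache behavior described above is sketched below: each entry carries an estimated TTL, every valid entry is downgraded to an uncertain state when the MU wakes up, and an invalidation leaves only an identifier behind. This is an illustration of the stated mechanisms, not the SACCS specification; class and function names are invented.

```python
from enum import Enum
import time

class State(Enum):
    VALID = 1
    UNCERTAIN = 2    # set for every valid entry when the MU wakes up
    INVALID = 3

class MUCacheEntry:
    """Toy mobile-user cache (MUC) entry, illustrative only."""
    def __init__(self, data, ttl):
        self.data = data
        self.state = State.VALID
        self.expires = time.time() + ttl   # estimated TTL kept with each entry

    def usable(self):
        # VALID, unexpired entries can be served locally; UNCERTAIN entries
        # must be confirmed with the server before use.
        return self.state is State.VALID and time.time() < self.expires

def on_wakeup(mu_cache):
    """After a disconnection, downgrade every valid entry to UNCERTAIN."""
    for entry in mu_cache.values():
        if entry.state is State.VALID:
            entry.state = State.UNCERTAIN

def on_invalidation(mu_cache, object_id):
    """Server invalidation: drop the data but keep an ID for the entry so a
    later refresh can be requested cheaply."""
    if object_id in mu_cache:
        mu_cache[object_id].state = State.INVALID
        mu_cache[object_id].data = None
```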

11.
Dynamic Web applications have gained a great deal of popularity. Improving the performance of these applications has recently attracted the attention of many researchers. One of the most important techniques proposed for this purpose is caching, which can be done at different locations and at different stages of the process of generating a dynamic Web page. Most of the caching schemes proposed in the literature are lenient about the issue of consistency; they assume that users can tolerate receiving stale data. However, an important class of dynamic Web applications consists of those in which users always expect to get the freshest data available. Any caching scheme has to incur a significant overhead to provide this level of consistency (i.e., strong consistency); the overhead may be so large that it neutralizes the benefits of caching. In this paper, three alternative architectures are investigated for dynamic Web applications that require strong consistency. A proxy caching scheme is designed and implemented, which performs caching at the level of database queries. This caching system is used in one of the alternative architectures. The performance experiments show that, despite the high overhead of providing strong consistency in database caching, this technique can improve the performance of dynamic Web applications, especially when there is a long network latency between clients and the (origin) server.

12.
In recent years, edge computing has emerged as a popular mechanism to deliver dynamic Web content to clients. However, many existing edge cache networks have not been able to harness the full potential of edge computing technology. In this paper, we argue and experimentally demonstrate that cooperation among the individual edge caches, coupled with scalable server-driven document consistency mechanisms, can significantly enhance the capabilities and performance of edge cache networks in delivering fresh dynamic content. However, designing large-scale cooperative edge cache networks presents many research challenges. Toward addressing these challenges, this paper presents the cooperative edge cache grid (cooperative EC grid, for short), a large-scale cooperative edge cache network for efficiently delivering highly dynamic Web content with varying server update frequencies. The design of the cooperative EC grid focuses on the scalability and reliability of dynamic content delivery in addition to cache hit rates, and it incorporates several novel features. We introduce the concept of cache clouds as a generic framework of cooperation in large-scale edge cache networks. The architectural design of the cache clouds includes dynamic hashing-based document lookup and update protocols, which dynamically balance lookup and update loads among the caches in the cloud. We also present cooperative techniques for making the document lookup and update protocols resilient to the failures of individual caches. This paper reports a series of simulation-based experiments which show that the overheads of cooperation in the cooperative EC grid are very low, and that our architecture and techniques enhance the performance of cooperative edge networks.
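The hashing-based lookup and update idea can be sketched as follows: each document has a "home" cache in the cloud, chosen by hashing its URL, and lookups and server update notices are routed through that home cache. The sketch below uses a static hash and hypothetical `holders_of`/`invalidate` APIs, so it does not capture the dynamic load balancing of the actual protocol.

```python
import hashlib

def home_cache(url, caches):
    """Map a document to the cache that tracks its lookup/update state.
    Minimal static-hash sketch; the paper's protocol is dynamic and
    load-balancing, which a plain modulo mapping does not capture."""
    digest = hashlib.sha1(url.encode("utf-8")).digest()
    return caches[int.from_bytes(digest[:4], "big") % len(caches)]

def lookup(url, caches):
    """A requesting cache asks the document's home cache which members of
    the cloud hold a fresh copy (hypothetical holders_of API)."""
    return home_cache(url, caches).holders_of(url)

def on_server_update(url, caches):
    """A server-driven update notice is sent once to the home cache, which
    fans it out only to the caches that actually hold the document."""
    home = home_cache(url, caches)
    for holder in home.holders_of(url):
        holder.invalidate(url)    # hypothetical cache API
```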

13.
Performance evaluation of Web proxy cache replacement policies   (total citations: 10; self-citations: 0; cited by others: 10)
Martin, Rich, Tai. Performance Evaluation, 2000, 39(1–4): 149–164
The continued growth of the World-Wide Web and the emergence of new end-user technologies such as cable modems necessitate the use of proxy caches to reduce latency, network traffic and Web server loads. In this paper we analyze the importance of different Web proxy workload characteristics in making good cache replacement decisions. We evaluate workload characteristics such as object size, recency of reference, frequency of reference, and turnover in the active set of objects. Trace-driven simulation is used to evaluate the effectiveness of various replacement policies for Web proxy caches. The extended duration of the trace (117 million requests collected over 5 months) allows long term side effects of replacement policies to be identified and quantified.

Our results indicate that higher cache hit rates are achieved using size-based replacement policies. These policies store a large number of small objects in the cache, thus increasing the probability of an object being in the cache when requested. To achieve higher byte hit rates, a few larger files must be retained in the cache. We found frequency-based policies to work best for this metric, as they keep the most popular files, regardless of size, in the cache. With either approach it is important that inactive objects be removed from the cache to prevent performance degradation due to pollution.
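The two families of policies discussed above can be caricatured in a few lines: a SIZE-style policy evicts the largest objects first to keep many small ones (favoring hit rate), while an LFU-style policy evicts the least-referenced objects first to keep popular ones regardless of size (favoring byte hit rate). The sketch is illustrative only and omits the aging needed to purge inactive objects.

```python
def evict_size_based(cache, bytes_needed):
    """SIZE-style policy: evict the largest objects first, keeping many small
    objects in the cache, which tends to maximize (document) hit rate."""
    victims, freed = [], 0
    for url, obj in sorted(cache.items(), key=lambda kv: kv[1]["size"], reverse=True):
        if freed >= bytes_needed:
            break
        victims.append(url)
        freed += obj["size"]
    return victims

def evict_frequency_based(cache, bytes_needed):
    """LFU-style policy: evict the least-referenced objects first, keeping the
    most popular files regardless of size, which tends to favor byte hit rate."""
    victims, freed = [], 0
    for url, obj in sorted(cache.items(), key=lambda kv: kv[1]["refs"]):
        if freed >= bytes_needed:
            break
        victims.append(url)
        freed += obj["size"]
    return victims

# Cache entries are assumed to look like {"size": bytes, "refs": reference count}.
```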


14.
Proxy caches are essential to improve the performance of the World Wide Web and to reduce user-perceived latency. Appropriate cache management strategies are crucial to achieving these goals. In our previous work, we introduced Web object-based caching policies. A Web object consists of the main HTML page and all of its constituent embedded files. Our studies have shown that these policies improve proxy cache performance substantially. In this paper, we propose a new Web object-based policy to manage the storage system of a proxy cache. We propose two techniques to improve the storage system performance. The first technique prefetches the related files belonging to a Web object from the disk to main memory. This prefetching improves performance because most of the files can then be provided from main memory rather than from the proxy disk. The second technique stores the Web object members in contiguous disk blocks in order to reduce disk access time. We used trace-driven simulations to study the performance improvements one can obtain with these two techniques. Our results show that the first technique by itself provides up to a 50% reduction in hit latency, which is the delay involved in providing a hit document by the proxy. An additional 5% improvement can be obtained by incorporating the second technique.
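A minimal sketch of the first technique is shown below: when the container HTML page of a Web object is read from disk, its embedded members are prefetched into main memory so that the follow-up requests are served from memory rather than from the proxy disk. The data structures and names are assumptions made for the illustration.

```python
def serve_with_object_prefetch(url, memory_cache, disk_store, web_objects):
    """Sketch of Web object-based prefetching from a proxy's disk store.

    web_objects maps a container page URL to the URLs of its embedded
    members (images, scripts, etc.); memory_cache is a plain dict and
    disk_store is any object with a read(url) method. All hypothetical.
    """
    if url in memory_cache:
        return memory_cache[url]                 # memory hit
    data = disk_store.read(url)                  # disk hit for the main page
    memory_cache[url] = data
    # Prefetch the sibling members of the same Web object, since requests
    # for them are likely to follow immediately.
    for member in web_objects.get(url, []):
        if member not in memory_cache:
            memory_cache[member] = disk_store.read(member)
    return data
```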

15.
The sharing of caches among proxies is an important technique to reduce Web traffic, alleviate network bottlenecks, and improve the response time of document requests. Most existing work on cooperative caching has focused on serving misses collaboratively. Very few studies have examined the effect of cooperation on document placement schemes and its potential for enhancing cache hit ratio and latency reduction. We propose a new document placement scheme which takes into account the contention at individual caches in order to limit the replication of documents within a cache group and increase the document hit ratio. The main idea of this new scheme is to view the aggregate disk space of the cache group as a global resource of the group, and to use the concept of cache expiration age to measure the contention at individual caches. The decision of whether to cache a document at a proxy is made collectively among the caches that already have a copy of this document. We refer to this new document placement scheme as the Expiration Age-based scheme (EA scheme). The EA scheme effectively reduces the replication of documents across the cache group, while ensuring that a copy of the document always resides in a cache where it is likely to stay for the longest time. We report our study on the potential and limits of the EA scheme using both analytic modeling and trace-based simulation. The analytical model compares and contrasts the existing (ad hoc) placement scheme of cooperative proxy caches with our new EA scheme and indicates that the EA scheme improves the effectiveness of aggregate disk usage, thereby increasing the average time duration for which documents stay in the cache. The trace-based simulations show that the EA scheme yields higher hit rates and better response times compared to the existing document placement schemes used in most caching proxies.
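A possible reading of the expiration-age-based placement decision is sketched below: a new replica is created at the requesting proxy only if it would survive noticeably longer there than at the proxies that already hold the document. The threshold, the accessor, and the exact rule are invented for the illustration; the paper's actual decision procedure may differ.

```python
def expiration_age(proxy):
    """Hypothetical accessor: average time an object survives in this proxy's
    cache before being evicted; a low value indicates a heavily contended cache."""
    return proxy.total_residence_time / max(proxy.evictions, 1)

def should_cache_copy(requesting_proxy, holders, min_gain=1.25):
    """Decide collectively whether to create another replica of a document.

    A new copy is stored at the requesting proxy only if it would live
    noticeably longer there than at the caches that already hold the
    document; otherwise the group relies on the existing copies.
    min_gain is an invented threshold for this illustration.
    """
    if not holders:
        return True    # nobody in the group holds it yet: cache the first copy
    best_existing = max(expiration_age(p) for p in holders)
    return expiration_age(requesting_proxy) > min_gain * best_existing
```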

16.
Web object caching is an important technique for reducing the traffic to Web servers and the latency of client accesses. Although Web caching greatly reduces server load, network congestion, and client access latency, it also introduces a cache consistency problem: the data a client obtains from the Web may not be the latest version. By analyzing existing cache consistency policies, this paper proposes a strong cache consistency algorithm suited to the Web.

17.
Songqing Chen, Xiaodong Zhang. Software, 2004, 34(14): 1381–1395
The amount of dynamic Web content and secured e-commerce transactions has been increasing dramatically on the Internet, where proxy servers between clients and Web servers are commonly used to share commonly accessed data and reduce Internet traffic. A significant and unnecessary Web access delay is caused by the overhead in proxy servers to process two types of accesses, namely dynamic Web content and secured transactions, which not only increases response time but also raises some security concerns. Conducting experiments on Squid proxy 2.3STABLE4, we have quantified this unnecessary processing overhead and shown its significant impact on client access response times. We have also analyzed the technical difficulties in eliminating or reducing the processing overhead, and the security loopholes of the existing proxy structure. To address these performance and security concerns, we propose a simple but effective client-side technique that adds a detector interfacing with a browser. With this detector, a standard browser, such as Netscape/Mozilla, gains simple detection and scheduling functions, becoming what we call a detective browser. Upon an Internet request from a user, the detective browser can immediately determine whether the requested content is dynamic or secured. If so, the browser bypasses the proxy and forwards the request directly to the Web server; otherwise, the request is processed through the proxy. We implemented a detective browser prototype in Mozilla version 0.9.7 and tested its functionality and effectiveness. Since we have simply moved the necessary detection functions from the proxy server to the browser, the detective browser introduces little overhead to Internet accesses, and our software can be patched to existing browsers easily.
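The detective browser's dispatch logic reduces to a small routing check, sketched below: secured or dynamic requests bypass the proxy, everything else goes through it. The heuristics used to flag a request as dynamic are illustrative assumptions, not the authors' detector.

```python
from urllib.parse import urlparse

DYNAMIC_HINTS = (".cgi", ".php", ".asp", ".jsp")   # illustrative heuristics only

def route_request(url):
    """Detective-browser style dispatch (sketch): secured or dynamic requests
    go straight to the origin server, everything else goes via the proxy."""
    parsed = urlparse(url)
    secured = parsed.scheme == "https"
    dynamic = parsed.query != "" or parsed.path.endswith(DYNAMIC_HINTS)
    return "direct-to-server" if (secured or dynamic) else "via-proxy"

print(route_request("https://shop.example/checkout"))          # direct-to-server
print(route_request("http://www.example/index.html"))          # via-proxy
print(route_request("http://www.example/search.cgi?q=cache"))  # direct-to-server
```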

18.
Communication overhead is the key obstacle to reaching hardware performance limits. The majority of this overhead is associated with software, a significant portion of which is attributed to message copying. To reduce this copying overhead, we have devised techniques that do not require copying a received message in order to bind it to its final destination. Rather, a late-binding mechanism, which involves address translation and a dedicated cache, facilitates fast access to received messages by the consuming process or thread. We have introduced two policies, namely Direct to Cache Transfer (DTCT) and lazy DTCT, that determine whether a message, after it is bound, needs to be transferred into the data cache. We have studied the proposed methods in simulation and have shown their effectiveness in reducing access times to message payloads by the consuming process.

19.
Mobile computing allows users to request critical information and receive swift responses anywhere, but mobile users can suffer from unreliable and ill-timed services due to the characteristics of wireless media. One way to reduce the possibility of unsatisfactory service is data replication. Data replication, however, inevitably induces the overhead of maintaining replica consistency, which requires a more expensive synchronization mechanism. We propose a new replicated data management scheme for distributed mobile environments. In order to alleviate the negative impact of synchronization message overhead in a fault-prone mobile environment, we devise a new replication control scheme called proxy quorum consensus (PQC). PQC minimizes the message overhead by coordinating quorum access activities by means of proxy mediated voting (PMV), which exploits reliable proxy hosts instead of unreliable mobile hosts in the voting process. We also propose a simulation model to study the performance of PQC. Based on the results of the performance evaluation, we conclude that the PQC scheme outperforms the traditional schemes.

20.
Proxy cache algorithms: design, implementation, and performance   (total citations: 4; self-citations: 0; cited by others: 4)
Caching at proxy servers is one of the ways to reduce the response time perceived by World Wide Web users. Cache replacement algorithms play a central role in response time reduction by selecting a subset of documents for caching, so that a given performance metric is maximized. At the same time, the cache must take extra steps to guarantee some form of consistency of the cached documents. Cache consistency algorithms enforce appropriate guarantees about the staleness of the cached documents. We describe a unified cache maintenance algorithm, LNC-R-W3-U, which integrates both cache replacement and consistency algorithms. The LNC-R-W3-U algorithm evicts documents from the cache based on the delay to fetch each document into the cache. Consequently, documents that took a long time to fetch are preferentially kept in the cache. The LNC-R-W3-U algorithm also factors into its eviction decisions the validation rate of each document, as provided by the cache consistency component of LNC-R-W3-U. Consequently, documents that are infrequently updated and thus seldom require validation are preferentially retained in the cache. We describe the implementation of LNC-R-W3-U and its integration with the Apache 1.2.6 code base. Finally, we present a trace-driven experimental study of LNC-R-W3-U performance and its comparison with other previously published algorithms for cache maintenance.
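A rough sketch of the kind of normalized-profit ordering LNC-R-W3-U embodies is given below: documents that are expensive to fetch, referenced often, and rarely invalidated are ranked as most worth keeping. The formula is an invented stand-in for the published metric and is meant only to show how fetch delay and validation rate can enter the same eviction decision.

```python
def eviction_order(cache):
    """Order cached documents from most to least evictable.

    Each entry is assumed to look like:
      {"refs_per_sec": ..., "fetch_delay": ..., "validations_per_sec": ..., "size": ...}
    Documents with low normalized profit (cheap to re-fetch, cold, or
    frequently updated) come first and are evicted first.
    """
    def profit(doc):
        d = cache[doc]
        keep_value = d["refs_per_sec"] * d["fetch_delay"]            # latency saved by keeping it
        churn = 1.0 + d["validations_per_sec"] * d["fetch_delay"]    # penalty for frequent updates
        return keep_value / (d["size"] * churn)
    return sorted(cache, key=profit)    # evict from the front of this list

docs = {
    "/slow-stable.html": {"refs_per_sec": 0.5, "fetch_delay": 2.0,
                          "validations_per_sec": 0.001, "size": 20_000},
    "/fast-volatile.html": {"refs_per_sec": 0.5, "fetch_delay": 0.1,
                            "validations_per_sec": 0.5, "size": 20_000},
}
print(eviction_order(docs))   # the fast-to-fetch, frequently updated page is evicted first
```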
