Similar Literature
20 similar documents found (search time: 31 ms)
1.
Cooperating proxy caches are groups of HTTP proxy servers that organize to share cached objects. This paper develops analytical models for proxy cooperation which use speedup in user response time as the performance metric. Speedup expressions are derived for the cooperation upper bound, a proxy mesh, and a three-level proxy hierarchy. The equations compare fundamental design approaches by separating the proxy organization for object delivery from the mechanism for object discovery. Discovery mechanisms analyzed for the mesh and hierarchy models include ideal discovery, Internet cache protocol (ICP) query, and distributed metadata directories. Equations are evaluated using parameter estimates from experiments and from analysis of cache trace logs. Results indicate that proxy cooperation is marginally viable from the standpoint of average user response time, and that the miss penalty for the hierarchy renders it less viable than the mesh. Proxy cooperation can, however, reduce the variability in user response time and the number of long delays. A trace-driven simulation shows that caching constraints have little effect on cooperation performance due to request filtering by lower level caches.
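The speedup metric used above can be illustrated with a toy response-time model. This is a hedged sketch: the hit/miss latency decomposition is the standard one, but every parameter value below is invented for illustration and is not taken from the paper.

```python
# Toy model of proxy-cooperation speedup: expected response time is a
# hit-probability-weighted sum of local-hit, cooperative-hit, and miss latencies.
# All numeric parameters are illustrative assumptions.

def mean_response_time(h_local, h_coop, t_local, t_coop, t_origin):
    """Expected response time (seconds) given hit ratios and per-class latencies."""
    miss = 1.0 - h_local - h_coop
    return h_local * t_local + h_coop * t_coop + miss * t_origin

# Without cooperation, requests that would have been cooperative hits
# must instead be fetched from the origin server.
t_no_coop = mean_response_time(0.35, 0.00, 0.02, 0.10, 0.50)
t_with_coop = mean_response_time(0.35, 0.10, 0.02, 0.10, 0.50)
speedup = t_no_coop / t_with_coop
```

With these assumed numbers the speedup is only about 1.14, which is consistent with the paper's conclusion that cooperation is marginally viable for average response time.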

2.
We present architectures and algorithms for efficiently serving dynamic data at highly accessed Web sites, together with the results of an analysis motivating our design and quantifying its performance benefits. This includes algorithms for keeping cached data consistent, so that dynamic pages can be cached at the Web server and dynamic content can be served at the performance level of static content. We show that our system design is able to achieve cache hit ratios close to 100% for cached data which is almost never obsolete by more than a few seconds, if at all. Our architectures and algorithms provide more than an order of magnitude improvement in performance over conventional methods, while using an order of magnitude fewer servers.

3.
ICP and the Squid web cache (cited by 11, 0 self-citations)
We describe the structure and functionality of the Internet cache protocol (ICP) and its implementation in the Squid web caching software. ICP is a lightweight message format used for communication among Web caches. Caches exchange ICP queries and replies to gather information to use in selecting the most appropriate location from which to retrieve an object. We present background on the history of ICP, and discuss issues in ICP deployment, efficiency, security, and interaction with other aspects of Web traffic behavior. We catalog successes, failures, and lessons learned from using ICP to deploy a global Web cache hierarchy.
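The ICP message format is compact enough to sketch directly. The encoder below follows the ICP v2 header layout published in RFC 2186 (a 20-byte fixed header, then for queries a 4-byte requester address and a null-terminated URL); it is an illustrative sketch, not Squid's implementation.

```python
# Minimal ICP v2 query encoder (layout per RFC 2186; illustrative, not Squid code).
import struct

ICP_OP_QUERY = 1   # opcode for a cache query
ICP_VERSION = 2    # protocol version

def build_icp_query(reqnum: int, url: str) -> bytes:
    # Query payload: 4-byte requester host address (zeroed here) + URL + NUL.
    payload = struct.pack("!I", 0) + url.encode() + b"\x00"
    length = 20 + len(payload)  # total message length, including the header
    # Fixed header: opcode, version, length, request number,
    # options, option data, sender host address (last three zeroed here).
    header = struct.pack("!BBHIIII", ICP_OP_QUERY, ICP_VERSION, length,
                         reqnum, 0, 0, 0)
    return header + payload

pkt = build_icp_query(42, "http://example.com/")
```

A cooperating cache would send such a datagram over UDP to each peer and pick a retrieval location based on the ICP_OP_HIT/MISS replies.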

4.
On-demand routing protocols use route caches to make routing decisions. Due to mobility, cached routes easily become stale. To address the cache staleness issue, prior work in DSR used heuristics with ad hoc parameters to predict the lifetime of a link or a route. However, heuristics cannot accurately estimate timeouts because topology changes are unpredictable. In this paper, we propose proactively disseminating the broken link information to the nodes that have that link in their caches. We define a new cache structure called a cache table and present a distributed cache update algorithm. Each node maintains in its cache table the information necessary for cache updates. When a link failure is detected, the algorithm notifies all reachable nodes that have cached the link in a distributed manner. The algorithm does not use any ad hoc parameters, thus making route caches fully adaptive to topology changes. We show that the algorithm outperforms DSR with path caches and with Link-MaxLife, an adaptive timeout mechanism for link caches. We conclude that proactive cache updating is key to the adaptation of on-demand routing protocols to mobility.
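The cache-table idea can be sketched as a small data structure. This is an illustrative stand-in, not the paper's exact distributed algorithm; route names, link tuples, and the "downstream" bookkeeping are hypothetical.

```python
# Sketch of a cache table: per cached link, record which downstream nodes
# learned the link from us, so a broken-link notification reaches only the
# nodes that actually cache it (illustrative; not the paper's algorithm).

class CacheTable:
    def __init__(self):
        self.routes = {}      # route id -> list of links (node-pair tuples)
        self.learned_by = {}  # link -> set of downstream nodes to notify

    def add_route(self, rid, links, downstream=()):
        self.routes[rid] = list(links)
        for link in links:
            self.learned_by.setdefault(link, set()).update(downstream)

    def link_broken(self, link):
        """Purge every cached route using the link; return nodes to notify."""
        stale = [rid for rid, ls in self.routes.items() if link in ls]
        for rid in stale:
            del self.routes[rid]
        return self.learned_by.pop(link, set())

ct = CacheTable()
ct.add_route("r1", [("A", "B"), ("B", "C")], downstream={"D"})
ct.add_route("r2", [("B", "C"), ("C", "E")])
notify = ct.link_broken(("B", "C"))  # both routes use the failed link
```

On a real node, each member of `notify` would receive the broken-link message and apply the same purge, propagating the update without any timeout parameters.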

5.
In this paper we discuss the performance of a document distribution model that interconnects Web caches through a satellite channel. In recent years Web caching has emerged as an important way to reduce client-perceived latency and network resource requirements in the Internet. Satellite distribution is also being rapidly deployed to offer Internet services while avoiding highly congested terrestrial links. When Web caches are interconnected through a satellite channel, every cache ends up containing all documents requested by a huge community of clients. With such a large community of clients connected to a cache, the probability that a client is the first to request a document is very small, and the number of requests hit in the cache increases. In this paper we develop analytical models to study the performance of a cache-satellite distribution. We derive simple expressions for the hit rate of the caches, the bandwidth used on the satellite channel, the latency experienced by the clients, and the required capacity of the caches. Additionally, we use trace-driven simulations to validate our model and evaluate the performance of a real cache-satellite distribution.
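The community-size effect can be demonstrated with a toy simulation (an assumption-laden sketch, not the paper's analytical model): when the satellite feed makes every cache hold all documents the community has requested, only the first request in the whole community misses, so the hit rate grows with the number of aggregated requests.

```python
# Toy simulation: with a shared (satellite-fed) cache, a request misses only if
# it is the community's first request for that document. The skewed popularity
# draw below is an illustrative assumption, not the paper's traffic model.
import random

def simulated_hit_rate(n_requests, n_docs, seed=0):
    rng = random.Random(seed)
    seen, hits = set(), 0
    for _ in range(n_requests):
        # Squaring the uniform draw skews requests toward low-numbered documents.
        doc = int(n_docs * rng.random() ** 2)
        if doc in seen:
            hits += 1
        seen.add(doc)
    return hits / n_requests

small_community = simulated_hit_rate(1_000, 500)
large_community = simulated_hit_rate(20_000, 500)
```

The larger request aggregate yields the higher hit rate, matching the abstract's argument that a huge shared client community makes first requests rare.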

6.
Network caching of objects has become a standard way of reducing network traffic and latency in the Web. However, individual web caches exhibit poor performance, with a hit rate of about 30%. One way to improve this hit rate is to have a group of proxies form a co-operative cache in which objects can be cached for later retrieval. A co-operative cache system includes protocols for hierarchical and transversal caching. The drawback of such a system lies in the resulting network load, due to the number of messages that must be exchanged to locate an object. This paper proposes a new co-operative web caching architecture, which unifies previous methods of web caching. Performance results show that the architecture achieves up to a 70% co-operative hit rate and accesses cached objects in at most two hops. Moreover, the architecture is scalable, with low traffic and database overhead. Copyright © 2002 John Wiley & Sons, Ltd.

7.
Summary cache: a scalable wide-area Web cache sharing protocol (cited by 8, 0 self-citations)
The sharing of caches among Web proxies is an important technique to reduce Web traffic and alleviate network bottlenecks. Nevertheless, it is not widely deployed due to the overhead of existing protocols. In this paper we demonstrate the benefits of cache sharing, measure the overhead of the existing protocols, and propose a new protocol called "summary cache". In this new protocol, each proxy keeps a summary of the cache directory of each participating proxy, and checks these summaries for potential hits before sending any queries. Two factors contribute to our protocol's low overhead: the summaries are updated only periodically, and the directory representations are very economical, as low as 8 bits per entry. Using trace-driven simulations and a prototype implementation, we show that, compared to existing protocols such as the Internet cache protocol (ICP), summary cache reduces the number of intercache protocol messages by a factor of 25 to 60, reduces bandwidth consumption by over 50%, and eliminates 30% to 95% of the protocol CPU overhead, all while maintaining almost the same cache hit ratios as ICP. Hence summary cache scales to a large number of proxies. (This paper is a revision of Fan et al. 1998; we add more data and analysis in this version.)
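The "economical directory representation" at roughly 8 bits per entry is a Bloom filter. The sketch below shows the idea in miniature; the sizing, hash count, and SHA-256-derived bit positions are illustrative assumptions, not the paper's parameters.

```python
# Minimal Bloom-filter summary in the spirit of summary cache's compact
# directories (~8 bits per cached entry; sizes and hash scheme are illustrative).
import hashlib

class BloomSummary:
    def __init__(self, n_entries, bits_per_entry=8, n_hashes=4):
        self.m = n_entries * bits_per_entry   # total bits in the summary
        self.k = n_hashes
        self.bits = bytearray((self.m + 7) // 8)

    def _positions(self, url):
        # Derive k bit positions from salted SHA-256 digests of the URL.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{url}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, url):
        for p in self._positions(url):
            self.bits[p // 8] |= 1 << (p % 8)

    def maybe_contains(self, url):
        """False => definitely not cached; True => probably cached."""
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(url))

summary = BloomSummary(n_entries=1000)
summary.add("http://example.com/a")
```

A proxy consults each peer's summary before querying; false positives only cost a wasted query, while false negatives are impossible for entries the summary has seen, which is what lets summaries be exchanged only periodically.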

8.
Web caches have become an integral component contributing to the improvement of the performance observed by Web clients. Cache satellite distribution systems (CSDSs) have emerged as a technology for feeding the caches with the information clients are expected to request, ahead of time. In such a system, the participating proxies periodically report to a central station about requests received from their clients. The central station selects a collection of Web documents, which are "pushed" via a satellite broadcast to the participating proxies, so that upon a future local request for the documents, they will already reside in the local cache, and will not need to be fetched from the terrestrial network. In this paper, our aim is to address the issues of how to operate the CSDS, how to design it, and how to estimate its effect. Questions of interest are: 1) what Web documents should be transmitted by the central station and 2) what is the benefit of adding a particular proxy into a CSDS? We offer a model for CSDS that accounts for the request streams addressed to the proxies and which captures the intricate interaction between the proxy caches. Unlike models that are based only on the access frequency of the various documents, this model captures both their frequency and their locality of reference. We provide an analysis that is based on the stochastic properties of the traffic streams that can be derived from HTTP logs, examine it on real traffic, and demonstrate its applicability in selecting a set of proxies into a CSDS.

9.
Internet security threats are continually evolving as hackers try to stay ahead of countermeasures. In the last two years, a growing number of hackers have shifted their focus from straight-on firewall assaults and virus-laden emails - although these threats are not entirely things of the past - to Web-based attacks that expose Web site visitors to spyware, phishing scams, viruses, trojans, and other malicious code. An especially insidious new threat to Web users is malicious content residing in cached Web pages on storage and caching servers, such as those used by leading search engine providers, Web 2.0 sites, and Internet service providers (ISPs). This paper gives a step-by-step description of the infection method that uses search engine caching servers. Three examples of this type of Web threat, based on a recent analysis of Web pages on the public storage and caching servers of three popular search engine providers, are presented.

10.
Video on demand (VOD) is one of the key applications of the information era. A key factor limiting its widespread use is the huge bandwidth required to transmit digitized video to a large group of clients with widely varying requirements. This paper addresses the issue of heterogeneous clients by proposing a program caching scheme called the partial video sequence (PVS) caching scheme. The PVS caching scheme decomposes video sequences into a number of parts using a scalable video compression algorithm. Video parts are selected to be cached in local video servers based on the amount of bandwidth that would be demanded from the distribution network and central video server if they were kept only in the central video server. We also show that the PVS caching scheme is suitable for handling vastly varying client requirements.
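The selection criterion (cache the parts whose absence would demand the most backbone bandwidth, subject to local capacity) resembles a knapsack problem. The greedy sketch below is an assumption of mine, not the paper's selection algorithm, and the part names and numbers are hypothetical.

```python
# Greedy stand-in for PVS-style part selection: rank scalable-video parts by
# backbone bandwidth demand per cached byte, and cache greedily under capacity.
# Part names, sizes, and demand figures are hypothetical.

def select_parts(parts, capacity):
    """parts: list of (name, size, bandwidth_demand) tuples."""
    chosen, used = [], 0
    for name, size, demand in sorted(parts, key=lambda p: p[2] / p[1],
                                     reverse=True):
        if used + size <= capacity:
            chosen.append(name)
            used += size
    return chosen

parts = [("base-layer", 2, 10.0),   # small and heavily demanded
         ("enh-layer-1", 3, 6.0),
         ("enh-layer-2", 5, 2.0)]
cached_parts = select_parts(parts, capacity=5)
```

Scalable compression is what makes this per-part choice possible: low-capability clients can be served from the base layer alone, while enhancement layers are cached only when the bandwidth saving justifies the space.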

11.
The article is a review of the book: Web Caching and Its Applications, written by S.V. Nagaraj and published by Springer, 2004. Web caching technology improves client download times and reduces network traffic by caching frequently accessed copies of Web objects close to the clients. The primary research issues in Web caching are where to cache copies of objects (cache placement), how to keep the cached copies consistent (cache consistency), and how to redirect clients to the optimal cache server (client redirection). Web caching systems' design space is huge, and building a good caching system involves several issues. Over the past decade, researchers have carried out a tremendous amount of work in addressing these issues. In Web Caching and Its Applications, S.V. Nagaraj aims to provide a bird's eye view of this research. He has exhaustively surveyed the literature and summarized the results of several research publications. The author concludes that the book can serve as a reference tool for researchers and for graduate students working on Web systems. However, its approach isn't suitable for Web administrators or students who are new to the field.

12.
Mogul, J.C. IEEE Network, 2000, 14(3): 6-14
Computer system designers often use caches to solve performance problems. Caching in the World Wide Web has been both the subject of extensive research and the basis of a large and growing industry. Traditional Web caches store HTTP responses, in anticipation of a subsequent reference to the URL of a cached response. Unfortunately, experience with real Web users shows that there are limits to the performance of this simple caching model, because many responses are useful only once. Researchers have proposed a variety of more complex ways in which HTTP caches can exploit locality in real reference streams. This article surveys several techniques, and reports the results of trace-based studies of a proposal based on automatic recognition of duplicated content.
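One simple realization of duplicated-content recognition is to index cached responses by a digest of the body, so identical payloads served under different URLs are stored once. This sketch is illustrative of the general idea the article surveys, not the specific proposal it studies.

```python
# Digest-indexed cache: identical response bodies reachable via different URLs
# share one stored copy (a sketch of duplicate-content recognition in general,
# not the article's specific proposal).
import hashlib

class DedupCache:
    def __init__(self):
        self.by_digest = {}   # body digest -> body bytes
        self.url_index = {}   # URL -> body digest

    def store(self, url, body):
        d = hashlib.sha256(body).hexdigest()
        self.by_digest.setdefault(d, body)  # keep only one copy per payload
        self.url_index[url] = d

    def lookup(self, url):
        d = self.url_index.get(url)
        return None if d is None else self.by_digest[d]

cache = DedupCache()
cache.store("http://a.example/logo", b"PNGDATA")
cache.store("http://b.example/logo", b"PNGDATA")  # same bytes, different URL
stored_copies = len(cache.by_digest)
```

This captures locality that URL-keyed caching misses: a response that is "useful only once" under its own URL may still be a duplicate of content already in the cache.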

13.
Performance benchmarking of wireless Web servers (cited by 1, 0 self-citations)
Guangwei, Kehinde, Carey. Ad Hoc Networks, 2007, 5(3): 392-412
The advent of mobile computers and wireless networks enables the deployment of wireless Web servers and clients in short-lived ad hoc network environments, such as classroom area networks. The purpose of this paper is to benchmark the performance capabilities of wireless Web servers in such an environment. Network traffic measurements are conducted on an in-building IEEE 802.11b wireless ad hoc network, using a wireless-enabled Apache Web server, several wireless clients, and a wireless network traffic analyzer. The experiments focus on the HTTP transaction rate and end-to-end throughput achievable in such an ad hoc network environment, and the impacts of factors such as Web object size, number of clients, and persistent HTTP connections. The results show that the wireless network bottleneck manifests itself in several ways: inefficient HTTP performance, client-side packet losses, server-side packet losses, network thrashing, and unfairness among Web clients. Persistent HTTP connections offer up to 350% improvement in HTTP transaction rate and user-level throughput, while also improving fairness for mobile clients accessing content from a wireless Web server.

14.
This paper proposes a new dynamic scheduling algorithm for mobile streaming media based on cache windows and segment-patch prefetching. Through adaptive expansion and shrinking of the proxy cache window and segmented caching of patch blocks, it enforces the principle that the amount of data a proxy server caches for a mobile streaming object is proportional to the object's popularity. Simulation results show that the algorithm adapts better than traditional algorithms to changes in the client request arrival rate. With the same maximum cache space, it significantly reduces the patch data transmitted over patch channels, thereby lowering server and backbone network bandwidth usage; it also caches media objects into the cache window quickly, while reducing the proxy server's average cache occupancy.

15.
We investigate online browsing of interrelated content, represented as a catalog of items of interest featuring graph dependencies. The content is served to clients via a system of decentralized proxy caches connected to cloud servers. A client selects the next item to browse from the list of recommended items, displayed on the currently browsed item's catalog page. A cache has a limited size to have every item selected by its browsing clients available for local access. Thus, the system pays a penalty, whenever a client selects an item that cannot be served directly from the proxy. Conversely, the system gains a reward, if a client selects an immediately available item. We aim to select the items to cache that maximize the profit earned by the system, for the given cache capacity. We design two linear-time optimization techniques for finding the desired items to cache. We enhance the operation of the system via two additional strategies. The first one dynamically tracks the items' selection probabilities for a client, as a function of its prior catalog access pattern and those of its community peers. The second one constructs dynamic overlays, on behalf of the clients, that are used to share the selected items directly among them. This augments the system's serving capacity and enhances the clients' browsing experience. We study the performance of the optimization techniques via numerical experiments. They exhibit efficiency gains over reference methods, by exploiting the content dependencies and correlated community-driven access patterns of the clients. We also report proxy bandwidth savings achieved by our overlay strategy over state-of-the-art methods, on content access patterns of clients with Facebook or Twitter ties.
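The reward/penalty objective above is a capacity-constrained selection problem. The greedy ranking below is a simple stand-in of mine, not the paper's linear-time techniques, and the item names, sizes, and selection probabilities are hypothetical.

```python
# Greedy stand-in for profit-maximizing cache selection: rank catalog items by
# expected selection probability per unit of cache space, and fill the cache.
# (Illustrative only; the paper designs two linear-time optimization methods.)

def cache_selection(items, capacity):
    """items: dict of name -> (size, selection_probability)."""
    selected, used = set(), 0
    ranked = sorted(items, key=lambda n: items[n][1] / items[n][0], reverse=True)
    for name in ranked:
        size, _prob = items[name]
        if used + size <= capacity:
            selected.add(name)
            used += size
    return selected

items = {"intro": (1, 0.5), "seq-a": (2, 0.3), "seq-b": (4, 0.2)}
selected = cache_selection(items, capacity=3)
```

Each cached item that a client selects earns the reward (a proxy hit); every non-cached selection pays the penalty of a cloud fetch, so caching the probable, compact items first maximizes expected profit under this simple model.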

16.
A segment-popularity-based proxy caching algorithm for mobile streaming media (cited by 1, 0 self-citations)
This paper proposes P2CAS2M2 (proxy caching algorithm based on segment popularity for mobile streaming media), which performs proxy cache admission and replacement according to the popularity of mobile streaming media object segments. The algorithm keeps the amount of data cached at the proxy for each streaming object proportional to that object's popularity, and dynamically sizes each object's cache window according to the clients' average access duration. Simulation results show that P2CAS2M2 adapts better than A2LS (adaptive and lazy segmentation algorithm) to changes in proxy cache size: with the same cache space, it caches a larger average number of streaming objects and delays a smaller fraction of initial requests, thereby reducing startup latency, while its byte hit ratio approaches and sometimes exceeds that of A2LS.
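The proportionality principle (cache space per object proportional to its popularity) reduces to a one-line allocation rule. This sketch and its object names and popularity counts are illustrative assumptions.

```python
# Popularity-proportional cache allocation: each streaming object receives a
# cache share proportional to its measured popularity (values hypothetical).

def allocate_cache(popularity, total_cache):
    """popularity: dict of object -> popularity weight; returns per-object MB."""
    total = sum(popularity.values())
    return {obj: total_cache * p / total for obj, p in popularity.items()}

alloc = allocate_cache({"clip-a": 6, "clip-b": 3, "clip-c": 1}, total_cache=100)
```

A full implementation would recompute popularity as requests arrive and admit or evict segments so each object's cached prefix tracks its current share, which is the adaptivity the abstract credits for the reduced startup latency.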

17.
Cache cooperation improves the performance of isolated caches, especially for caches with small cache populations. To make caches cooperate on a large scale and effectively increase the cache population, several caches are usually federated in caching architectures. We discuss and compare the performance of different caching architectures. In particular, we consider hierarchical and distributed caching. We derive analytical models to study important performance parameters of hierarchical and distributed caching, i.e., clients' perceived latency, bandwidth usage, load in the caches, and disk space usage. Additionally, we consider a hybrid caching architecture that combines hierarchical caching with distributed caching at every level of a caching hierarchy. We evaluate the performance of a hybrid scheme and determine the optimal number of caches that should cooperate at each caching level to minimize clients' retrieval latency.

18.
Mobile computing is considered of major importance to the computing industry for the forthcoming years due to the progress in the wireless communications area. A proxy-based architecture for accelerating Web browsing in wireless customer premises networks is presented. Proxy caches, maintained in base stations, are constantly relocated to follow the roaming user. A cache management scheme is proposed, which involves the relocation of full caches to the most probable cells but also percentages of the caches to less likely neighbors. Relocation is performed according to the output of a user movement prediction algorithm based on a learning automaton. The simulation of the scheme shows considerable benefits for the end user.
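A learning automaton for next-cell prediction can be sketched as a linear reward scheme: when the user's actual move confirms a prediction, probability mass shifts toward that cell. This is a generic toy automaton, not the paper's specific predictor; the cell names and learning rate are assumptions.

```python
# Toy linear reward scheme for next-cell prediction (illustrative stand-in for
# the paper's learning automaton; cell names and learning rate are assumed).

def reward(probs, chosen, lr=0.2):
    """Shift probability toward the action (cell) that proved correct."""
    return {cell: (p + lr * (1.0 - p)) if cell == chosen else p * (1.0 - lr)
            for cell, p in probs.items()}

probs = {"cell-east": 1/3, "cell-west": 1/3, "cell-north": 1/3}
for _ in range(10):              # the user keeps moving to the eastern cell
    probs = reward(probs, "cell-east")
```

The cache manager would relocate the full cache to the highest-probability cell and push fractions of the cache to the remaining neighbors in proportion to their (smaller) probabilities.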

19.
The ability of a Web service to provide low-latency access to its content is constrained by available network bandwidth. While differentiated quality of service (QoS) is typically enforced through network mechanisms, in this paper we introduce a robust mechanism for managing network resources using application-specific characteristics of Web services. We use transcoding to allow Web servers to customize the size of the objects constituting a Web page, and hence the bandwidth consumed by that page, by dynamically varying the size of multimedia objects on a per-client basis. We leverage our earlier work on characterizing quality versus size tradeoffs in transcoding JPEG images to supply more information for determining the quality and size of the object to transmit. We evaluate the performance benefits of incorporating this information in a series of bandwidth management policies, using realistic workloads and access scenarios to drive our system. The principal contribution of this paper is the demonstration that it is possible to use informed transcoding techniques to provide differentiated service and to dynamically allocate available bandwidth among different client classes, while delivering good quality of information content for all clients. We also show that it is possible to customize multimedia objects to the highly variable network conditions experienced by mobile clients in order to provide acceptable quality and latency depending on the networks used in accessing the service. We show that policies that aggressively transcode the larger images can produce images with quality factor values that closely follow the untranscoded base case while still saving as much as 150 kB. A transcoding policy that has knowledge of the characteristics of the link to the client can avoid as many as 40% of (unnecessary) transcodings.
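An informed transcoding policy can be sketched as "pick the highest JPEG quality whose estimated size fits the client's byte budget". The monotone size-versus-quality curve below is a made-up placeholder, not the paper's measured JPEG characterization, and all numbers are hypothetical.

```python
# Bandwidth-aware transcoding policy sketch: descend through candidate JPEG
# quality factors and return the first whose estimated size fits the budget.
# The size model (a power of the quality factor) is an invented placeholder.

def pick_quality(orig_kb, budget_kb, qualities=(90, 75, 50, 25)):
    """Return (quality_factor, estimated_kb) for the best quality that fits."""
    for q in qualities:                        # descending quality
        est_kb = orig_kb * (q / 100) ** 0.7    # assumed size-vs-quality curve
        if est_kb <= budget_kb:
            return q, est_kb
    # Nothing fits: fall back to the most aggressive transcoding available.
    q = qualities[-1]
    return q, orig_kb * (q / 100) ** 0.7

q, est_size = pick_quality(orig_kb=200, budget_kb=100)
```

A policy with knowledge of the client link would set `budget_kb` from the measured link rate and latency target, and could skip transcoding entirely when the original already fits, which is how unnecessary transcodings are avoided.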

20.
With the diffusion of wireless connections to Internet, the number of complex operations carried out from mobile users is increasing. To cope with bandwidth limitations and with disconnections, data caching is the most used technique. However for complex operation like dynamic searching a better solution is to take advantage of the multichannel property offered by CDMA protocol. In this case, cached documents can be allocated on distinguished channels in a dynamic way to obtain a better utilization of the radio communication links. We study a particular caching strategy suitable to be integrated with a radio-channel policy. We consider a semantic caching for intranet queries (or intranet searching) that takes advantage of data semantics by caching query answers instead of pages in order to exploit similarities between different queries. In fact, in a WLAN scenario, Internet activity is frequently composed by intranet searching operations characterized by local queries that aim to explore documents stored in a neighbor of the home site. We study benefits from a channel allocation strategy applied to intranet searching with semantic caching. Simulation experiments are carried out by considering an indoor scenario model where mobile clients perform keyword-based queries answered by local Web servers running application we refer to as WISH (Wireless Intranet SearcHing), an intranet searching tool based on semantic caching. The results show a 12% improvement in radio channel usage for 20% of users that share cached documents.  相似文献   
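Semantic caching of query answers can be sketched under conjunctive (AND) keyword semantics: if a new query's keyword set is a superset of a cached query's, its answer is contained in the cached answer and can be computed locally by filtering. This is a simplified illustration of the general idea, not the WISH implementation; document and keyword names are hypothetical.

```python
# Simplified semantic cache for keyword queries: cache query *answers*, and
# answer a more restrictive query by filtering a cached superset answer
# (illustrative sketch of semantic caching, not the WISH tool itself).

class SemanticCache:
    def __init__(self):
        self.answers = {}  # frozenset of keywords -> list of matching doc ids

    def put(self, keywords, doc_ids):
        self.answers[frozenset(keywords)] = list(doc_ids)

    def get(self, keywords, doc_keywords):
        """Return an answer derivable from cache, or None on a semantic miss."""
        q = frozenset(keywords)
        for cached_q, doc_ids in self.answers.items():
            if cached_q <= q:  # new query is more restrictive: filter locally
                return [d for d in doc_ids if q <= doc_keywords[d]]
        return None  # must contact the server (and perhaps a radio channel)

doc_keywords = {"d1": {"cache", "web", "proxy"}, "d2": {"cache", "radio"}}
sc = SemanticCache()
sc.put({"cache"}, ["d1", "d2"])
hit = sc.get({"cache", "web"}, doc_keywords)
```

Answering the refined query from the cached answer avoids a round trip over the wireless link, which is exactly the saving that a channel allocation policy can then amplify by letting users share cached answers.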


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号