Similar Documents
20 similar documents found.
1.
The article is a review of the book Web Caching and Its Applications, written by S.V. Nagaraj and published by Springer in 2004. Web caching technology improves client download times and reduces network traffic by caching frequently accessed copies of Web objects close to the clients. The primary research issues in Web caching are where to cache copies of objects (cache placement), how to keep the cached copies consistent (cache consistency), and how to redirect clients to the optimal cache server (client redirection). The design space of Web caching systems is huge, and building a good caching system involves several issues. Over the past decade, researchers have carried out a tremendous amount of work in addressing these issues. In Web Caching and Its Applications, S.V. Nagaraj aims to provide a bird's eye view of this research. He has exhaustively surveyed the literature and summarized the results of several research publications. The author concludes that the book can serve as a reference tool for researchers and for graduate students working on Web systems. However, its approach isn't suitable for Web administrators or students who are new to the field.

2.
Proxy caching for media streaming over the Internet (total citations: 7; self-citations: 0; citations by others: 7)
Streaming media has contributed to a significant amount of today's Internet traffic. Like conventional Web objects (e.g., HTML pages and images), media objects can benefit from proxy caching; but their unique features such as huge size and high bandwidth demand imply that conventional proxy caching strategies have to be substantially revised. This article discusses the critical issues and challenges of cache management for proxy-assisted media streaming. We survey, classify, and compare the state-of-the-art solutions. We also investigate advanced issues of combining multicast with caching, cooperating among proxies, and leveraging proxy caching in overlay networks.

3.
Wireless mesh networks (WMNs) have been proposed to provide cheap, easily deployable and robust Internet access. The dominant Internet-access traffic from clients causes a congestion bottleneck around the gateway, which can significantly limit the throughput of the WMN clients in accessing the Internet. In this paper, we present MeshCache, a transparent caching system for WMNs that exploits the locality in client Internet-access traffic to mitigate the bottleneck effect at the gateway, thereby improving client-perceived performance. MeshCache leverages the fact that a WMN typically spans a small geographic area and hence mesh routers are easily over-provisioned with CPU, memory, and disk storage, and extends the individual wireless mesh routers in a WMN with built-in content caching functionality. It then performs cooperative caching among the wireless mesh routers. We explore two architecture designs for MeshCache: (1) caching at every client access mesh router upon file download, and (2) caching at each mesh router along the route the Internet-access traffic travels, which requires breaking a single end-to-end transport connection into multiple single-hop transport connections along the route. We also leverage the abundant research results from cooperative web caching in the Internet in designing cache selection protocols for efficiently locating caches containing data objects for these two architectures. We further compare these two MeshCache designs with caching at the gateway router only. Through extensive simulations and evaluations using a prototype implementation on a testbed, we find that MeshCache can significantly improve the performance of client nodes in WMNs. In particular, our experiments with a Squid-based MeshCache implementation deployed on the MAP mesh network testbed with 15 routers show that compared to caching at the gateway only, the MeshCache architecture with hop-by-hop caching reduces the load at the gateway by 38%, improves the average client throughput by 170%, and increases the number of transfers that achieve a throughput greater than 1 Mbps by a factor of 3.

4.
Web caching has been widely used to alleviate Internet traffic congestion in World Wide Web (WWW) services. To reduce download times, an effective web cache management strategy is needed that exploits web usage information to decide which stored documents to evict when the cache is saturated. This paper presents the Learning Based Replacement (LBR) algorithm, a hybrid approach to web cache replacement that incorporates a machine learning technique (naive Bayes) into the LRU replacement method to better predict, from the access history in a web log, the probability that a cached page will be revisited by a subsequent request. The learned knowledge indicates which URL objects in the cache should be kept or evicted; the learning-based model captures hidden aspects of the user request pattern in order to predict re-reference probability. In a number of experiments, LBR improves revisit-probability prediction, hit rate, and byte hit rate over the traditional LRU, LFU, and GDSF methods.
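To make the hybrid replacement idea concrete, the following is a minimal sketch of an LBR-style policy, not the authors' implementation: the feature extraction, the candidate-window size, and the way the classifier score is combined with LRU order are illustrative assumptions.

from collections import OrderedDict

class LBRCache:
    """Sketch of a hybrid LRU + learned-revisit-probability cache (illustrative only)."""

    def __init__(self, capacity, revisit_model):
        self.capacity = capacity        # maximum number of cached objects
        self.cache = OrderedDict()      # url -> object, kept in LRU order
        self.model = revisit_model      # assumed to expose predict_proba(features) -> P(revisit)

    def _features(self, url):
        # Hypothetical features drawn from the access log for this URL.
        return {"url": url}

    def get(self, url, fetch):
        if url in self.cache:
            self.cache.move_to_end(url)          # refresh recency on a hit
            return self.cache[url]
        obj = fetch(url)                         # miss: fetch from the origin server
        if len(self.cache) >= self.capacity:
            self._evict()
        self.cache[url] = obj
        return obj

    def _evict(self):
        # Among the K least recently used objects, evict the one the classifier
        # considers least likely to be revisited.
        k = min(5, len(self.cache))
        candidates = list(self.cache.keys())[:k]
        victim = min(candidates,
                     key=lambda u: self.model.predict_proba(self._features(u)))
        del self.cache[victim]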

5.
In this paper we discuss the performance of a document distribution model that interconnects Web caches through a satellite channel. During recent years Web caching has emerged as an important way to reduce client-perceived latency and network resource requirements in the Internet. Satellite distribution is also being rapidly deployed to offer Internet services while avoiding highly congested terrestrial links. When Web caches are interconnected through a satellite distribution, the caches end up containing all documents requested by a huge community of clients. With a large community of clients connected to a cache, the probability that a client is the first one to request a document is very small, and the number of requests that hit in the cache increases. In this paper we develop analytical models to study the performance of a cache-satellite distribution. We derive simple expressions for the hit rate of the caches, the bandwidth of the satellite channel, the latency experienced by the clients, and the required capacity of the caches. Additionally, we use trace-driven simulations to validate our model and evaluate the performance of a real cache-satellite distribution.

6.
Scalable proxy caching of video under storage constraints (total citations: 10; self-citations: 0; citations by others: 10)
Proxy caching has been used to speed up Web browsing and reduce networking costs. In this paper, we study the extension of proxy caching techniques to streaming video applications. A trivial extension consists of storing complete video sequences in the cache. However, this may not be applicable in situations where the video objects are very large and proxy cache space is limited. We show that the approaches proposed in this paper (referred to as selective caching), where only a few frames are cached, can also contribute to significant improvements in the overall performance. In particular, we discuss two network environments for streaming video, namely, quality-of-service (QoS) networks and best-effort networks (Internet). For QoS networks, the video caching goal is to reduce the network bandwidth costs; for best-effort networks, the goal is to increase the robustness of continuous playback against poor network conditions (such as congestion, delay, and loss). Two different selective caching algorithms (SCQ and SCB) are proposed, one for each network scenario, to increase the relevant overall performance metric in each case, while requiring only a fraction of the video stream to be cached. The main contribution of our work is to provide algorithms that are efficient even when the buffer memory available at the client is limited. These algorithms are also scalable so that when changes in the environment occur it is possible, with low complexity, to modify the allocation of cache space to different video sequences.

7.
Internet service providers (ISPs) have taken measures to reduce intolerable inter-ISP peer-to-peer (P2P) traffic costs, and the user experience of various P2P applications has suffered as a result. The recently emerging offline downloading service seeks to improve user experience by using dedicated servers to cache requested files and provide high-speed uploading. However, with the rapid increase in user population, the server-side bandwidth of offline downloading systems is expected to become insufficient in the near future. We propose a novel complementary caching scheme with the goal of mitigating inter-ISP traffic, alleviating the load on the servers of Internet applications, and enhancing user experience. Both the architecture and the caching algorithm are presented in this paper. On the one hand, with full knowledge of the P2P file sharing system and the offline downloading service, the complementary caching infrastructure is designed to be conveniently deployable and to work together with existing platforms; the cooperation mechanisms among the major components are also included. On the other hand, with an in-depth understanding of the traffic characteristics relevant to caching, we develop a complementary caching algorithm that accounts for the density of requests, the redundancy of a file, and file size. Since this information can be captured in real time in our design, the proposed policy can guide the storage and replacement of cached units. Based on real-world traces spanning 3 months, we demonstrate that the complementary caching scheme achieves a 'three-win' objective: for P2P downloading, over 50% of traffic is redirected to the cache; for offline downloading, the average server-dependence of tasks drops from 0.71 to 0.32; and for user experience, the average P2P transfer rate increases by more than 50 KB/s.

8.
This article studies the filtering effect of TTL-based hierarchical Web caching; traffic characteristics have an important influence on the performance of TTL-based dynamic Web caching systems. In a cache hierarchy, only missed requests are forwarded to the next cache level, so the traffic is filtered level by level and its characteristics change accordingly. The article uses simulation to study how the hierarchical filtering of TTL-based dynamic Web caching affects the traffic, focusing on changes in the request inter-arrival model and the object popularity distribution.

9.
DNS performance and the effectiveness of caching (total citations: 3; self-citations: 0; citations by others: 3)
This paper presents a detailed analysis of traces of domain name system (DNS) and associated TCP traffic collected on the Internet links of the MIT Laboratory for Computer Science and the Korea Advanced Institute of Science and Technology (KAIST). The first part of the analysis details how clients at these institutions interact with the wide-area domain name system, focusing on client-perceived performance and the prevalence of failures and errors. The second part evaluates the effectiveness of DNS caching. In the most recent MIT trace, 23% of lookups receive no answer; these lookups account for more than half of all traced DNS packets since query packets are retransmitted overly persistently. About 13% of all lookups result in an answer that indicates an error condition. Many of these errors appear to be caused by missing inverse (IP-to-name) mappings or NS records that point to nonexistent or inappropriate hosts. 27% of the queries sent to the root name servers result in such errors. The paper also presents the results of trace-driven simulations that explore the effect of varying time-to-live (TTL) and varying degrees of cache sharing on DNS cache hit rates. Due to the heavy-tailed nature of name accesses, reducing the TTL of address (A) records to as low as a few hundred seconds has little adverse effect on hit rates, and little benefit is obtained from sharing a forwarding DNS cache among more than 10 or 20 clients. These results suggest that client latency is not as dependent on aggressive caching as is commonly believed, and that the widespread use of dynamic low-TTL A-record bindings should not greatly increase DNS-related wide-area network traffic.
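A minimal trace-driven simulation in the spirit of this study might look like the sketch below; the trace format, the single shared cache, and the TTL values are assumptions for illustration.

def hit_rate(trace, ttl):
    """trace: iterable of (timestamp_seconds, dns_name) tuples, time-ordered."""
    expires = {}          # name -> expiry time of the cached A record
    hits = total = 0
    for t, name in trace:
        total += 1
        if expires.get(name, 0.0) > t:
            hits += 1                 # answer served from the cache
        else:
            expires[name] = t + ttl   # miss: resolve and cache until t + ttl
    return hits / total if total else 0.0

# Example: with heavy-tailed name popularity, even a few-hundred-second TTL
# retains most of the benefit of much longer TTLs.
trace = [(0, "a.com"), (10, "a.com"), (400, "a.com"), (401, "b.com"), (402, "a.com")]
for ttl in (60, 300, 3600):
    print(ttl, round(hit_rate(trace, ttl), 2))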

10.
ICP and the Squid web cache (total citations: 11; self-citations: 0; citations by others: 11)
We describe the structure and functionality of the Internet cache protocol (ICP) and its implementation in the Squid web caching software. ICP is a lightweight message format used for communication among Web caches. Caches exchange ICP queries and replies to gather information to use in selecting the most appropriate location from which to retrieve an object. We present background on the history of ICP, and discuss issues in ICP deployment, efficiency, security, and interaction with other aspects of Web traffic behavior. We catalog successes, failures, and lessons learned from using ICP to deploy a global Web cache hierarchy.

11.
This paper aims at finding fundamental design principles for hierarchical Web caching. An analytical modeling technique is developed to characterize an uncooperative two-level hierarchical caching system where the least recently used (LRU) algorithm is locally run at each cache. With this modeling technique, we are able to identify a characteristic time for each cache, which plays a fundamental role in understanding the caching processes. In particular, a cache can be viewed roughly as a low-pass filter with its cutoff frequency equal to the inverse of the characteristic time. Documents with access frequencies lower than this cutoff frequency have good chances to pass through the cache without cache hits. This viewpoint enables us to take any branch of the cache tree as a tandem of low-pass filters at different cutoff frequencies, which further results in the finding of two fundamental design principles. Finally, to demonstrate how to use the principles to guide the caching algorithm design, we propose a cooperative hierarchical Web caching architecture based on these principles. Both model-based and real trace simulation studies show that the proposed cooperative architecture results in more than 50% memory saving and substantial central processing unit (CPU) power saving for the management and update of cache entries compared with the traditional uncooperative hierarchical caching architecture.
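One standard way to formalize the characteristic time of an LRU cache is the Che approximation, reproduced below as background (the paper's exact model may differ). Assume requests for document i arrive as a Poisson process with rate \lambda_i and the cache holds C documents.

% Che approximation for an LRU cache of capacity C (background sketch,
% not necessarily the paper's exact model).
\begin{align}
  \sum_{i} \left(1 - e^{-\lambda_i T_C}\right) &= C,
    && \text{which implicitly defines the characteristic time } T_C, \\
  h_i &= 1 - e^{-\lambda_i T_C},
    && \text{the hit probability of document } i.
\end{align}

Documents with \lambda_i much smaller than 1/T_C have h_i \approx \lambda_i T_C \approx 0 and tend to pass through the cache unhit, which is exactly the low-pass-filter behavior with cutoff frequency 1/T_C described above.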

12.
In this paper, we investigate an incentive edge caching mechanism for an internet of vehicles (IoV) system based on the paradigm of software-defined networking (SDN). We start by proposing a distributed SDN-based IoV architecture. Then, based on this architecture, we focus on the economic side of caching by considering a competitive cache-enabler market composed of one content provider (CP) and multiple mobile network operators (MNOs). Each MNO manages a set of cache-enabled small base stations (SBSs). The CP incites the MNOs to store its popular contents in the cache-enabled SBSs with the highest access probability to enhance the satisfaction of its users. By leasing their cache-enabled SBSs, the MNOs aim to earn more monetary profit. We formulate the interaction between the CP and the MNOs as a Stackelberg game, where the CP acts first as the leader by announcing the quantity of popular content that it wishes to cache and by fixing the caching popularity threshold, a minimum access probability below which a content cannot be cached. The MNOs then act as followers, responding with the content quantity they accept to cache and the corresponding caching price. A noncooperative subgame is formulated to model the competition between the followers over the CP's limited content quantity. We analyze the leader's and the followers' optimization problems, and we prove the existence of the Stackelberg equilibrium (SE). Simulation results show that our game-based incentive caching model achieves optimal utilities and outperforms other incentive caching mechanisms with monopoly cache-enablers, while improving user satisfaction by 30% and reducing the caching cost.
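The backward-induction logic of such a Stackelberg formulation can be illustrated with the toy sketch below; the utility functions, parameter values, and the symmetric split among followers are assumptions and do not reproduce the paper's model.

PRICE_PER_UNIT = 1.0    # assumed leasing price the CP pays per cached content unit
STORAGE_COST = 0.02     # assumed quadratic storage-cost coefficient of each MNO
CP_VALUE = 4.0          # assumed CP utility coefficient (diminishing returns below)
N_MNOS = 3

def follower_best_response(q_max):
    # MNO profit: PRICE_PER_UNIT * q - STORAGE_COST * q**2, maximized over 0 <= q <= q_max.
    q_star = PRICE_PER_UNIT / (2 * STORAGE_COST)
    return min(q_star, q_max)

def leader_utility(Q):
    # The CP splits the announced quantity Q evenly; each MNO best-responds.
    cached = sum(follower_best_response(Q / N_MNOS) for _ in range(N_MNOS))
    return CP_VALUE * cached ** 0.5 - PRICE_PER_UNIT * cached

# The leader solves its problem by anticipating the followers' responses (backward induction).
best_Q = max(range(101), key=leader_utility)
print("equilibrium announced quantity:", best_Q)   # -> 4 with these toy parameters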

13.
Over-the-top (OTT) services such as Netflix, Amazon Prime, and YouTube generate the most dominant form of traffic on the Internet today. There is increasingly high demand for resource-intensive 3D content, interactive media, 360° media, and user-generated content. As the amount of content keeps multiplying, it is important to cache content intelligently. Caching algorithms need to exploit in-network caching, community-based pre-caching, or a combination of the two. Hence, we survey CDN-based edge caching infrastructures including OpenConnect (Netflix) and Google Edge, followed by CCN-based in-network caching. We implement and compare four different approaches for caching content: (1) in-network caching, (2) edge caching, (3) community-based in-network caching, and (4) community-based edge caching. We run our algorithms under varying network conditions with different topologies, cache sizes, content popularity, and request arrivals, and compare the delay of all four approaches. We verify our model by calculating important performance parameters including hop count, redundancy, and hop count variance. Hop count is an important performance parameter since it influences processing, queuing, and transmission delays. We focus on determining whether an in-network caching approach is any better than edge caching. We reach several conclusions. First, in most scenarios, community-based in-network caching performs the best. Second, if the cache size is less than 30% of the total content size, then community-based edge caching is better for less popular content. Finally, our statistical analysis reveals that a community-based edge caching mechanism is least affected by varying cache sizes and dynamic user behavior, which makes it a better choice for meeting Service Level Agreements.

14.
The development of proxy caching is essential in the area of video-on-demand (VoD) to meet users' expectations. VoD requires high bandwidth and creates high traffic due to the nature of media. Many researchers have developed proxy caching models to reduce bandwidth consumption and traffic. Proxy caching keeps part of a media object to meet the viewing expectations of users without delay and provides interactive playback. If caching is done continuously, the entire cache space will be exhausted at some point. Hence, the proxy server must apply cache replacement policies to replace existing objects and allocate the cache space for incoming objects. Researchers have developed many cache replacement policies by considering several parameters, such as recency, access frequency, cost of retrieval, and size of the object. In this paper, the Weighted-Rank Cache replacement Policy (WRCP) is proposed. This policy uses parameters such as access frequency, aging, and mean access gap ratio, together with the size and cost of retrieval of the object. The WRCP applies our previously developed proxy caching model, Hot-Point Proxy, at four levels of replacement, depending on the cache requirement. Simulation results show that the WRCP outperforms our earlier model, the Dual Cache Replacement Policy.
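As an illustration of how such parameters can be folded into a single eviction rank, consider the sketch below; the weights and the exact formula are assumptions, not the published WRCP definition.

import time
from dataclasses import dataclass

@dataclass
class Segment:
    obj_id: str
    size: int            # bytes held in the proxy cache
    cost: float          # cost of re-fetching the segment from the origin server
    accesses: int        # how many times the segment has been requested
    last_access: float   # timestamp of the most recent request
    mean_gap: float      # mean time between successive requests (seconds)

def rank(seg, now, w_freq=1.0, w_age=0.5, w_gap=0.5):
    # Higher frequency, lower age, smaller access gap, higher re-fetch cost, and
    # smaller size all push the rank up (more worth keeping); weights are assumed.
    age = now - seg.last_access
    return (w_freq * seg.accesses - w_age * age - w_gap * seg.mean_gap) * seg.cost / seg.size

def choose_victim(segments):
    # Evict the segment with the lowest weighted rank.
    now = time.time()
    return min(segments, key=lambda s: rank(s, now))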

15.
Network processing in the current Internet operates on entire data packets, which is problematic under network congestion. The newly proposed Internet service named Qualitative Communication changes the network processing paradigm to an even finer granularity, namely the chunk level, which renders obsolete many existing networking policies and schemes, especially the caching algorithms and cache replacement policies that have been extensively explored in Web caching, Content Delivery Networks (CDNs), and Information-Centric Networks (ICNs). This paper outlines the new factors introduced by random linear network coding-based Qualitative Communication and demonstrates the importance and necessity of considering them. A novel metric that takes these new factors into account is proposed. An optimization problem is formulated to maximize the total metric value of the chunks retained in the local storage of network nodes under a storage limit, and a cache replacement scheme that obtains the optimal result in a recursive manner is proposed accordingly. Performance evaluations show that the proposed cache replacement algorithm remarkably reduces end-to-end latency compared to existing schemes in various network scenarios.
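The storage-constrained selection described above can be illustrated with the classic 0/1 knapsack recursion below; the per-chunk metric values and the recursion itself are given only as an example of the kind of optimization involved, not the paper's actual scheme.

from functools import lru_cache

def best_retention(chunks, capacity):
    """chunks: list of (size, metric_value) pairs; capacity: storage limit in the same units as size."""
    @lru_cache(maxsize=None)
    def solve(i, remaining):
        if i == len(chunks):
            return 0.0
        size, value = chunks[i]
        best = solve(i + 1, remaining)                                # drop chunk i
        if size <= remaining:
            best = max(best, value + solve(i + 1, remaining - size))  # keep chunk i
        return best
    return solve(0, capacity)

print(best_retention([(4, 10.0), (3, 7.0), (2, 6.0)], 5))  # -> 13.0 (keep the last two chunks)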

16.
Cooperative Caching Strategy in Mobile Ad Hoc Networks Based on Clusters (total citations: 1; self-citations: 0; citations by others: 1)
In this paper, we present a scheme called Cluster Cooperative (CC) for caching in mobile ad hoc networks. In the CC scheme, the network topology is partitioned into non-overlapping clusters based on physical network proximity. For a local cache miss, each client looks for the data item within its cluster. If no client inside the cluster has cached the requested item, the request is forwarded to the next client on the routing path towards the server. A cache replacement policy called Least Utility Value with Migration (LUV-Mi) is developed. The LUV-Mi policy is suitable for cooperation in a clustered ad hoc environment because it considers the performance of an entire cluster along with the performance of the local client. Simulation experiments show that the CC caching mechanism achieves significant improvements in cache hit ratio and average query latency in comparison with other caching strategies.

17.
Information-centric networking (ICN) has emerged as a promising candidate for designing content-based future Internet paradigms. ICN increases the utilization of a network through location-independent content naming and in-network content caching. In routers, the cache replacement policy determines which content is replaced when free cache space runs short. Thus, it has a direct influence on user experience, especially content delivery time. Meanwhile, content can be provided from different locations simultaneously because of the multi-source property of content in ICN. To the best of our knowledge, no work has yet studied the impact of the cache replacement policy on content delivery time considering multi-source content delivery in ICN, an issue addressed in this paper. As our contribution, we analytically quantify the average content delivery time when different cache replacement policies, namely least recently used (LRU) and random replacement (RR), are employed. Notably, we show that which policy is superior depends on the popularity distribution of the content. The expected content delivery time in an assumed network topology is studied by both theoretical and experimental methods. On the basis of the obtained results, some interesting findings on the performance of the studied cache replacement policies are provided.

18.
Traffic Modeling and Proportional Partial Caching for Peer-to-Peer Systems (total citations: 2; self-citations: 0; citations by others: 2)
Peer-to-peer (P2P) file sharing systems generate a major portion of the Internet traffic, and this portion is expected to increase in the future. We explore the potential of deploying proxy caches in different Autonomous Systems (ASes) with the goal of reducing the cost incurred by Internet service providers and alleviating the load on the Internet backbone. We conduct an eight-month measurement study to analyze the P2P traffic characteristics that are relevant to caching, such as object popularity, popularity dynamics, and object size. Our study shows that the popularity of P2P objects can be modeled by a Mandelbrot–Zipf distribution, and that several workloads exist in P2P traffic. Guided by our findings, we develop a novel caching algorithm for P2P traffic that is based on object segmentation and proportional partial admission and eviction of objects. Our trace-based simulations show that with a relatively small cache size, a byte hit rate of up to 35% can be achieved by our algorithm, which is close to the byte hit rate achieved by an off-line optimal algorithm with complete knowledge of future requests. Our results also show that our algorithm achieves a byte hit rate that is at least 40% higher than, and at most triple, the byte hit rate of common web caching algorithms. Furthermore, our algorithm is robust in the face of aborted downloads, which are common in P2P systems.
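For reference, the Mandelbrot–Zipf popularity model assigns rank-i probability p(i) = K/(i+q)^alpha, where the plateau factor q flattens the head of the distribution relative to plain Zipf. The sketch below computes the distribution and the request share of a fully cached "head"; the parameter values are illustrative assumptions rather than the paper's measurements.

def mandelbrot_zipf(n_objects, alpha, q):
    weights = [1.0 / (i + q) ** alpha for i in range(1, n_objects + 1)]
    total = sum(weights)
    return [w / total for w in weights]   # normalized p(i) for ranks 1..n_objects

def top_k_request_share(popularity, k):
    # Fraction of requests absorbed by the k most popular objects if they were
    # fully cached (an upper bound on the hit rate for a cache of that size).
    return sum(popularity[:k])

pop = mandelbrot_zipf(n_objects=10000, alpha=0.8, q=20)
print(round(top_k_request_share(pop, 100), 3))   # the head captures far less traffic
                                                 # than under plain Zipf (q = 0)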

19.
Today's HTTP carries Web interactions over client-initiated TCP connections. An important implication of using this transport method is that interception caches in the network violate the end-to-end principle of the Internet, which severely limits deployment options of these caches. Furthermore, while an increasing number of Web interactions are short, and in fact frequently carry only control information and no data, TCP is often inefficient for short interactions. We propose a new transfer protocol for the Web, called Dual-Transport HTTP (DHTTP), which splits the traffic between UDP and TCP channels. When choosing the TCP channel, it is the server who opens the connection back to the client. Through server-initiated connections, DHTTP upholds the Internet end-to-end principle in the presence of interception caches, thereby allowing unrestricted caching within backbones. Moreover, a comparative performance study of DHTTP and HTTP using trace-driven simulation as well as testing real HTTP and DHTTP servers showed a significant performance advantage of DHTTP when the bottleneck is at the server and comparable performance when the bottleneck is in the network.

20.
Due to the widening gap between the performance of microprocessors and that of memory, using caches in a system to take advantage of locality in its workload has become a standard approach to improve overall system performance. At the same time, many performance problems ultimately reduce to cache performance issues. Locality in the system workload is what makes caching possible. In this paper, we first use the reuse distance model to characterize temporal locality in Internet traffic. We develop a model that closely matches the empirical data. We then extend the work to investigate temporal locality in the workload of multi-processor forwarding systems by comparing locality under different packet scheduling schemes. Our simulations show that for systems with hash-based schedulers, caching can be an effective way to improve forwarding performance. Based on flow-level traffic characteristics, we further discuss the relationship between load-balancing and hash-scheduling, which yields insights into system design. Copyright © 2003 John Wiley & Sons, Ltd.
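The reuse distance measure used here can be computed directly from a trace, as in the sketch below: for each reference (e.g., a destination address in a packet trace), the reuse distance is the number of distinct items referenced since its previous occurrence. The trace format is an assumption, and the quadratic-time implementation is for illustration only.

def reuse_distances(trace):
    last_seen = {}                     # item -> index of its previous reference
    distances = []                     # one entry per re-reference (first uses excluded)
    for i, item in enumerate(trace):
        if item in last_seen:
            window = trace[last_seen[item] + 1:i]
            distances.append(len(set(window)))   # distinct items seen in between
        last_seen[item] = i
    return distances

# Example: small distances dominate when the trace has strong temporal locality.
print(reuse_distances(["a", "b", "a", "c", "b", "a"]))   # -> [1, 2, 2]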
