Similar Documents
 20 similar documents found.
1.
In this paper we discuss the performance of a document distribution model that interconnects Web caches through a satellite channel. In recent years Web caching has emerged as an important way to reduce client-perceived latency and network resource requirements in the Internet. At the same time, satellite distribution is being rapidly deployed to offer Internet services while avoiding highly congested terrestrial links. When Web caches are interconnected through a satellite channel, each cache ends up containing all documents requested by a huge community of clients. With such a large community connected to a cache, the probability that a client is the first to request a document is very small, so the fraction of requests that hit in the cache increases. In this paper we develop analytical models to study the performance of a cache-satellite distribution. We derive simple expressions for the hit rate of the caches, the bandwidth in the satellite channel, the latency experienced by the clients, and the required capacity of the caches. Additionally, we use trace-driven simulations to validate our model and evaluate the performance of a real cache-satellite distribution.
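To make the hit-rate intuition concrete, here is a hedged back-of-the-envelope model (the notation and update-rate assumption are mine, not necessarily the paper's): suppose document $i$ is updated at rate $\mu_i$ and requested by the whole community of $N$ clients at aggregate rate $\lambda_i$. Since the satellite channel replicates every requested document into every cache, a request misses only when it is the first one after an update, giving

$$ h_i = \frac{\lambda_i}{\lambda_i + \mu_i}, \qquad H = \frac{\sum_i \lambda_i h_i}{\sum_i \lambda_i}. $$

Because $\lambda_i$ grows with $N$ while $\mu_i$ does not, $h_i \to 1$ as the community grows, which is exactly the effect the abstract describes.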

2.
Data caching can significantly improve the efficiency of information access in a wireless ad hoc network by reducing the access latency and bandwidth usage. However, designing efficient distributed caching algorithms is nontrivial when network nodes have limited memory. In this article, we consider the cache placement problem of minimizing total data access cost in ad hoc networks with multiple data items and nodes with limited memory capacity. The above optimization problem is known to be NP-hard. Defining benefit as the reduction in total access cost, we present a polynomial-time centralized approximation algorithm that provably delivers a solution whose benefit is at least 1/4 (1/2 for uniform-size data items) of the optimal benefit. The approximation algorithm is amenable to localized distributed implementation, which is shown via simulations to perform close to the approximation algorithm. Our distributed algorithm naturally extends to networks with mobile nodes. We simulate our distributed algorithm using a network simulator (ns2) and demonstrate that it significantly outperforms another existing caching technique (by Yin and Cao [33]) in all important performance metrics. The performance differential is particularly large in more challenging scenarios such as higher access frequency and smaller memory.
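As a concrete illustration of benefit-driven placement, here is a minimal greedy sketch in the spirit of the abstract (the function names, unit-size items, and toy cost model are mine; the paper's centralized approximation algorithm and its 1/4 guarantee involve more machinery):

```python
from itertools import product

def total_cost(placement, dist, demand, server):
    # Total access cost: each node fetches each item from its nearest
    # replica; the server always holds every item.
    cost = 0
    for n, freqs in demand.items():
        for item, freq in freqs.items():
            holders = [server] + [m for m, s in placement.items() if item in s]
            cost += freq * min(dist[n][m] for m in holders)
    return cost

def greedy_placement(nodes, items, capacity, dist, demand, server):
    # Repeatedly make the single placement with the largest benefit
    # (reduction in total access cost) until nothing helps or fits.
    placement = {n: set() for n in nodes}
    while True:
        base = total_cost(placement, dist, demand, server)
        best, best_gain = None, 0
        for n, item in product(nodes, items):
            if item in placement[n] or len(placement[n]) >= capacity[n]:
                continue  # already cached here, or this node's memory is full
            placement[n].add(item)
            gain = base - total_cost(placement, dist, demand, server)
            placement[n].discard(item)
            if gain > best_gain:
                best, best_gain = (n, item), gain
        if best is None:
            return placement
        placement[best[0]].add(best[1])

# Toy 3-node line topology: node 0 is 2 hops from the server at node 2.
dist = {0: {0: 0, 1: 1, 2: 2}, 1: {0: 1, 1: 0, 2: 1}, 2: {0: 2, 1: 1, 2: 0}}
demand = {0: {"a": 5, "b": 1}, 1: {"a": 1, "b": 3}, 2: {}}
print(greedy_placement([0, 1], ["a", "b"], {0: 1, 1: 1}, dist, demand, server=2))
# -> {0: {'a'}, 1: {'b'}}: each node caches the item it requests most.
```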

3.
Network caching of objects has become a standard way of reducing network traffic and latency in the web. However, web caches exhibit poor performance, with a hit rate of about 30%. A solution to improve this hit rate is to have a group of proxies form a co-operative in which objects can be cached for later retrieval. A co-operative cache system includes protocols for hierarchical and transversal caching. The drawback of such a system lies in the resulting network load due to the number of messages that need to be exchanged to locate an object. This paper proposes a new co-operative web caching architecture, which unifies previous methods of web caching. Performance results show that the architecture achieves up to a 70% co-operative hit rate and accesses a cached object in at most two hops. Moreover, the architecture is scalable with low traffic and database overhead. Copyright © 2002 John Wiley & Sons, Ltd.

4.
IEEE Network, 1997, 11(6):37-44
Shared Web caches, also referred to as proxy Web servers, allow multiple clients to quickly access a pool of popular Web pages. An organization that provides shared caching to its Web clients will typically have a collection of shared caches rather than just one. For collections of shared caches, it is desirable to coordinate the caches so that all cached pages in the collection are shared among the organization's clients. In this article we investigate two classes of protocols for coordinating a collection of shared caches: the Internet cache protocol (ICP), which has caches ping each other to locate a cached object; and the hash-routing protocols, which place objects in the shared caches as a function of the objects' URLs. Our contribution is twofold. First, we compare the performance of the protocols with respect to cache-server overhead and object-retrieval latency; for a collection of shared caches, our analysis shows that the hash-routing schemes have significant performance advantages over ICP for both performance metrics. The existing hash-routing protocols assume that the cache servers are homogeneous in storage capacity and processing capability, even though most collections of cache servers are vastly heterogeneous. Our second contribution is to extend a robust hash-routing scheme so that it balances requests among the caches according to any desired distribution; the extended hash-routing scheme is robust in the face of cache failures, is tunable for heterogeneous caches, and can have significant performance advantages over ICP.
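One standard way to realize such a tunable, failure-robust hash-routing scheme is weighted highest-random-weight (rendezvous) hashing. The sketch below is my illustration of that idea, not necessarily the article's exact construction:

```python
import hashlib
import math

def pick_cache(url, caches):
    """Weighted rendezvous (HRW) hashing: every cache scores the URL and
    the highest score wins. `caches` maps cache name -> relative weight
    (e.g., its share of total capacity). Removing a cache re-homes only
    the objects it owned, which is what makes the scheme failure-robust."""
    def score(name):
        digest = hashlib.sha256((name + "|" + url).encode()).digest()
        h = int.from_bytes(digest[:8], "big")
        u = (h + 0.5) / 2**64               # uniform in (0, 1)
        return -caches[name] / math.log(u)  # weighted-HRW score
    return max(caches, key=score)

caches = {"cache-a": 1.0, "cache-b": 2.0, "cache-c": 1.0}  # b takes ~half
print(pick_cache("http://example.com/index.html", caches))
```

In expectation each cache receives requests in proportion to its weight, giving the "any desired distribution" balancing the abstract mentions.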

5.
With the exponential increase in the number of users and the volume of available data, service providers are finding it hard to satisfy and improve the end-user experience. Researchers have proposed exploiting the growing number of routers in a network, which has led to the development of information-centric networking (ICN). Efficient use of in-network caches and the content-forwarding methodology are the key issues in an ICN architecture. ICN reduces the average hop count, and correspondingly the average content download delay, because the intra-domain routers in ICN have storage capacity and can act as temporary content providers. In this paper, we address the content-management issue in a cache with finite storage capability and propose an efficient content-management policy that turns a router into a self-sustained cache. We propose a novel methodology for processing content packets in the buffer of a cache and correspondingly reducing the propagation delay through the cache. We simulate our proposed algorithm in a realistic network environment and evaluate different user-experience metrics, e.g. average latency, throughput, goodput, and link load. Simulation results suggest that our proposed model outperforms existing state-of-the-art on-path caching strategies.

6.
The explosive growth of mobile data traffic has driven cellular operators to seek low-cost alternatives for cellular traffic off-loading. In this paper, we consider a content delivery network in which a vehicular communication network composed of roadside units (RSUs) is integrated into a cellular network to serve as an off-loading platform. Each RSU, subject to its storage capacity, caches a subset of the contents of the central content server. Allocating a suitable subset of contents to each RSU cache so as to maximize the hit ratio of vehicle requests is a problem of paramount importance, and it is the one targeted in this study. First, we propose a centralized solution in which we model the cache content placement problem as a submodular maximization problem and show that it is NP-hard. Second, we propose a distributed cooperative caching scheme in which RSUs in an area periodically share information about their contents locally and update their caches accordingly. To this end, we model the distributed caching problem as a strategic resource-allocation game that achieves at least 50% of the optimal solution. Finally, we evaluate our scheme using the Simulation of Urban MObility (SUMO) simulator under realistic conditions. On average, the results show an improvement of 8% in the hit ratio of the proposed method compared with other well-known cache content placement approaches.

7.
The MIT Alewife Machine
A variety of models for parallel architectures, such as shared memory, message passing, and data flow, have recently converged in a hybrid architecture form called distributed shared memory (DSM). Alewife, an early prototype of such DSM architectures, uses hybrid software and hardware mechanisms to support coherent shared memory, efficient user-level messaging, fine-grain synchronization, and latency tolerance. Alewife supports up to 512 processing nodes connected over a scalable and cost-effective mesh network at a constant cost per node. Four mechanisms combine to achieve Alewife's goals of scalability and programmability: software-extended coherent shared memory provides a global, linear address space; integrated message passing allows compiler and operating-system designers to provide efficient communication and synchronization; support for fine-grain computation allows many processors to cooperate on small problem sizes; and latency-tolerance mechanisms, including block multithreading and prefetching, mask unavoidable delays due to communication. Extensive results from microbenchmarks, together with over a dozen complete applications running on a 32-node prototype, demonstrate that integrating message passing with shared memory enables a cost-efficient solution to the cache coherence problem and provides a rich set of programming primitives. Our results further show that messaging and shared-memory operations are both important, because each helps the programmer achieve the best performance for various machine configurations.

8.
Efficient Cache Placement in Multi-Hop Wireless Networks
In this paper, we address the problem of efficient cache placement in multi-hop wireless networks. We consider a network comprising a server with an interface to the wired network, and other nodes requiring access to the information stored at the server. In order to reduce access latency in such a communication environment, an effective strategy is caching the server information at some of the nodes distributed across the network. Caching, however, can imply a considerable overhead cost; for instance, disseminating information incurs additional energy as well as bandwidth burden. Since wireless systems are plagued by scarcity of available energy and bandwidth, we need to design caching strategies that optimally trade off overhead cost against access latency. We pose our problem as an integer linear program. We show that this problem is the same as a special case of the connected facility location problem, which is known to be NP-hard. We devise a polynomial-time algorithm which provides a suboptimal solution. The proposed algorithm applies to any arbitrary network topology and can be implemented in a distributed and asynchronous manner. In the case of a tree topology, our algorithm gives the optimal solution. In the case of an arbitrary topology, it finds a feasible solution with an objective function value within a factor of 6 of the optimal value. This performance is very close to that of the best approximate solution known today, which is obtained in a centralized manner. We compare the performance of our algorithm against three candidate cache placement schemes, and show via extensive simulation that our algorithm consistently outperforms these alternative schemes.
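To make the formulation concrete, here is one plausible rendering of such an integer linear program (the notation is mine; the paper's exact program may differ). Let $y_v = 1$ if a cache is placed at node $v$, $x_{uv} = 1$ if node $u$ fetches from the cache at $v$, $z_e = 1$ if edge $e$ carries dissemination traffic, $d(u,v)$ the hop distance, $a_u$ the request rate of node $u$, and $\beta$ the per-edge dissemination cost:

$$
\begin{aligned}
\min \quad & \sum_{u,v} a_u\, d(u,v)\, x_{uv} \;+\; \beta \sum_{e \in E} z_e \\
\text{s.t.} \quad & \textstyle\sum_v x_{uv} = 1 \;\; \forall u, \qquad x_{uv} \le y_v \;\; \forall u,v, \\
& \textstyle\sum_{e \in \delta(S)} z_e \;\ge\; y_v \quad \forall v \in S,\; \forall S \subseteq V \setminus \{s\}, \\
& x_{uv},\, y_v,\, z_e \in \{0,1\},
\end{aligned}
$$

where the cut constraints force the dissemination edges ($z_e = 1$) to connect every opened cache to the server $s$ — the structure of the connected facility location problem the abstract refers to.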

9.
In an information-centric network (ICN), using in-network caches to improve the efficiency of content retrieval and delivery is the most important feature of the architecture. However, the built-in caches are comparatively weak relative to the large volume of content that must be forwarded, and content placement lacks a balanced distribution. This paper proposes a caching strategy based on matching content popularity with node centrality (Popularity and Centrality Based Caching Scheme, PCBCS), which caches passing content selectively so as to improve the utilization of cache space at the nodes along the content delivery path and to reduce cache redundancy. Simulation results show that, compared with caching-everywhere on-path schemes, LCD (Leave Copy Down), and Prob (copy with Probability) with parameters 0.7 and 0.3, the proposed algorithm reduces the server hit rate by 30% on average and the number of hops needed to reach cached content by 20% on average; most importantly, compared with caching-everywhere on-path schemes, the total number of cache replacements is reduced by 40% on average.
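A minimal sketch of the popularity-centrality matching idea as I read the abstract (the helper names and ranking scheme are my own illustration, not the paper's code): the most popular content classes are cached at the most central nodes on the delivery path, so hot objects sit where many paths cross instead of being copied everywhere.

```python
def pcbcs_choose_cache(path, centrality, popularity_rank):
    """Pick the one on-path router that should cache this content.

    path            -- router ids from content source to requester
    centrality      -- router id -> centrality score (e.g. betweenness)
    popularity_rank -- 0 for the hottest popularity class, 1 next, ...
    """
    # Order the on-path routers from most to least central...
    by_centrality = sorted(path, key=lambda r: centrality[r], reverse=True)
    # ...and match the content's popularity rank to a centrality rank,
    # clamping unpopular content to the least central router.
    return by_centrality[min(popularity_rank, len(by_centrality) - 1)]

centrality = {"r1": 0.9, "r2": 0.4, "r3": 0.7}
print(pcbcs_choose_cache(["r1", "r2", "r3"], centrality, popularity_rank=0))  # r1
print(pcbcs_choose_cache(["r1", "r2", "r3"], centrality, popularity_rank=5))  # r2
```

Caching one copy per path, at a matched node, is what reduces the redundancy and replacement churn the abstract reports.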

10.
11.
Wireless mesh networks (WMNs) have been proposed to provide cheap, easily deployable, and robust Internet access. The dominant Internet-access traffic from clients causes a congestion bottleneck around the gateway, which can significantly limit the throughput of WMN clients in accessing the Internet. In this paper, we present MeshCache, a transparent caching system for WMNs that exploits the locality in client Internet-access traffic to mitigate the bottleneck effect at the gateway, thereby improving client-perceived performance. MeshCache leverages the fact that a WMN typically spans a small geographic area, so mesh routers are easily over-provisioned with CPU, memory, and disk storage, and it extends the individual wireless mesh routers in a WMN with built-in content-caching functionality. It then performs cooperative caching among the wireless mesh routers. We explore two architecture designs for MeshCache: (1) caching at every client's access mesh router upon file download, and (2) caching at each mesh router along the route the Internet-access traffic travels, which requires breaking a single end-to-end transport connection into multiple single-hop transport connections along the route. We also leverage the abundant research results from cooperative web caching in the Internet in designing cache-selection protocols for efficiently locating caches containing data objects for these two architectures. We further compare these two MeshCache designs with caching at the gateway router only. Through extensive simulations and evaluations using a prototype implementation on a testbed, we find that MeshCache can significantly improve the performance of client nodes in WMNs. In particular, our experiments with a Squid-based MeshCache implementation deployed on the MAP mesh network testbed with 15 routers show that, compared to caching at the gateway only, the MeshCache architecture with hop-by-hop caching reduces the load at the gateway by 38%, improves the average client throughput by 170%, and increases the number of transfers that achieve a throughput greater than 1 Mbps by a factor of 3.

12.
This paper studies the filtering effect of TTL-based Web cache hierarchies. Traffic characteristics have a significant influence on the performance of TTL-based dynamic Web caching systems. In a cache hierarchy, only requests that miss are forwarded to the next level of caches, so the hierarchy filters the traffic level by level and the traffic characteristics change accordingly. Using simulation, the paper examines the influence of TTL-based dynamic hierarchical Web cache filtering on traffic, focusing on changes in the request inter-arrival model and the object popularity distribution.
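As a hedged illustration of the filtering effect (the TTL value, popularity model, and two-level setup are my own toy choices, not the paper's experiment): feed a heavy-tailed request stream through a level-1 TTL cache and forward only the misses to level 2, then compare the two streams.

```python
import random

class TTLCache:
    """Each object fetched on a miss stays fresh for `ttl` time units."""
    def __init__(self, ttl):
        self.ttl, self.expiry = ttl, {}

    def request(self, obj, now):
        if self.expiry.get(obj, -1) > now:
            return True              # hit: cached copy still fresh
        self.expiry[obj] = now + self.ttl
        return False                 # miss: fetch and restart the TTL

random.seed(1)
l1, l2 = TTLCache(ttl=50), TTLCache(ttl=50)
forwarded, l2_hits = [], 0
for t in range(10_000):
    obj = int(random.paretovariate(1.2))   # heavy-tailed popularity
    if not l1.request(obj, t):             # only misses leave level 1
        forwarded.append(obj)
        l2_hits += l2.request(obj, t)
# Level 1 absorbs most repeats of hot objects, so the forwarded stream is
# less skewed and more widely spaced than the original, and level 2 sees a
# much lower hit rate -- the level-by-level filtering the paper studies.
print(len(forwarded), l2_hits)
```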

13.
Cooperating proxy caches are groups of HTTP proxy servers that organize to share cached objects. This paper develops analytical models for proxy cooperation that use speedup in user response time as the performance metric. Speedup expressions are derived for the cooperation upper bound, a proxy mesh, and a three-level proxy hierarchy. The equations compare fundamental design approaches by separating the proxy organization for object delivery from the mechanism for object discovery. Discovery mechanisms analyzed for the mesh and hierarchy models include ideal discovery, Internet cache protocol (ICP) queries, and distributed metadata directories. The equations are evaluated using parameter estimates from experiments and from analysis of cache trace logs. Results indicate that proxy cooperation is marginally viable from the standpoint of average user response time, and that the miss penalty of the hierarchy renders it less viable than the mesh. Proxy cooperation can, however, reduce the variability in user response time and the number of long delays. A trace-driven simulation shows that caching constraints have little effect on cooperation performance, due to request filtering by lower-level caches.
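A hedged sketch of what such a speedup expression looks like (the notation is mine; the paper derives separate expressions per organization and discovery mechanism): with local hit rate $h_\ell$, cooperative hit rate $h_c$, and mean response times $t_\ell < t_c < t_m$ for local hits, remote (cooperative) hits, and misses,

$$
S \;=\; \frac{T_{\text{alone}}}{T_{\text{coop}}}
  \;=\; \frac{h_\ell\, t_\ell + (1-h_\ell)\, t_m}
             {h_\ell\, t_\ell + h_c\, t_c + (1-h_\ell-h_c)\, t_m},
$$

so cooperation pays off only when the extra hits $h_c$ save more time than the discovery mechanism costs — consistent with the abstract's finding that cooperation is only marginally viable for average response time.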

14.
This paper aims at finding fundamental design principles for hierarchical Web caching. An analytical modeling technique is developed to characterize an uncooperative two-level hierarchical caching system where the least recently used (LRU) algorithm is locally run at each cache. With this modeling technique, we are able to identify a characteristic time for each cache, which plays a fundamental role in understanding the caching processes. In particular, a cache can be viewed roughly as a low-pass filter with its cutoff frequency equal to the inverse of the characteristic time. Documents with access frequencies lower than this cutoff frequency have good chances to pass through the cache without cache hits. This viewpoint enables us to take any branch of the cache tree as a tandem of low-pass filters at different cutoff frequencies, which further results in the finding of two fundamental design principles. Finally, to demonstrate how to use the principles to guide the caching algorithm design, we propose a cooperative hierarchical Web caching architecture based on these principles. Both model-based and real trace simulation studies show that the proposed cooperative architecture results in more than 50% memory saving and substantial central processing unit (CPU) power saving for the management and update of cache entries compared with the traditional uncooperative hierarchical caching architecture.
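The characteristic time admits a standard closed form (this is the widely used Che-style approximation, consistent with the abstract's low-pass-filter view, though the paper's exact derivation may differ): for Poisson requests at rate $\lambda_i$ for document $i$ and an LRU cache holding $C$ documents, the characteristic time $T$ is the unique solution of

$$
\sum_i \left(1 - e^{-\lambda_i T}\right) = C, \qquad h_i \approx 1 - e^{-\lambda_i T}.
$$

Documents with $\lambda_i \ll 1/T$ have $h_i \approx \lambda_i T \approx 0$ and pass through mostly unhit, so the cache behaves as a low-pass filter with cutoff frequency $1/T$, and a branch of the cache tree is a tandem of such filters at different cutoffs.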

15.
The throughput performance of wireless networks can be enhanced by applying the network coding (NC) technique based on opportunistic listening. The packets sent or overheard by a network node should be locally cached for possible future decoding. How to manage this cache so as to reduce the overhead incurred in performing NC while still exploiting the performance gain is an interesting issue that has not been deeply investigated. In this paper, we present a framework for packet-caching policy in multihop wireless networks, aiming at improving decoding efficiency and thus the throughput gain of NC. We formulate the caching-policy design as an optimization problem for maximizing decoding utility and derive a set of optimization rules. We propose a distributed network coding caching policy (NCP), which can be readily incorporated into various existing NC architectures to improve the NC performance gain. We theoretically analyze the performance improvement of NCP over completely opportunistic network coding (COPE). In addition, we use simulation experiments based on ns-2 to evaluate the performance of NCP. Numerical results validate our analytical model and show that NCP can effectively improve the performance gain of NC compared with COPE. Copyright © 2013 John Wiley & Sons, Ltd.

16.
Due to the widening gap between the performance of microprocessors and that of memory, using caches to take advantage of locality in the workload has become a standard approach to improving overall system performance. At the same time, many performance problems ultimately reduce to cache-performance issues. Locality in the system workload is what makes caching possible. In this paper, we first use the reuse-distance model to characterize temporal locality in Internet traffic, and we develop a model that closely matches the empirical data. We then extend the work to investigate temporal locality in the workload of multi-processor forwarding systems by comparing locality under different packet-scheduling schemes. Our simulations show that for systems with hash-based schedulers, caching can be an effective way to improve forwarding performance. Based on flow-level traffic characteristics, we further discuss the relationship between load balancing and hash scheduling, which yields insights into system design. Copyright © 2003 John Wiley & Sons, Ltd.
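For concreteness, here is a minimal sketch of how reuse distances are computed (a quadratic toy version of my own; production tools use tree-based structures for an O(N log N) pass over large traces):

```python
def reuse_distances(stream):
    """Reuse distance of each reference: the number of *distinct* addresses
    seen since the previous reference to the same address (inf on first use)."""
    last_pos, dists = {}, []
    for i, addr in enumerate(stream):
        if addr in last_pos:
            dists.append(len(set(stream[last_pos[addr] + 1 : i])))
        else:
            dists.append(float("inf"))
        last_pos[addr] = i
    return dists

# A reference with reuse distance d hits in a fully associative LRU cache
# larger than d entries, so the distance histogram fully characterizes the
# temporal locality that caching can exploit.
print(reuse_distances(["a", "b", "a", "c", "b", "a"]))
# -> [inf, inf, 1, inf, 2, 2]
```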

17.
With advances in process technology, soft errors are becoming an increasingly critical design concern. Owing to their large area, high density, and low operating voltages, caches are hit hardest by soft errors. Based on the observation that in multimedia applications not all data require the same amount of protection from soft errors, we propose a partially protected cache (PPC) architecture, in which there are two caches, one protected and one unprotected, at the same level of the memory hierarchy. We demonstrate that, compared to existing unprotected cache architectures, PPC architectures can provide a 47-fold reduction in failure rate at only 1% runtime and 3% power overheads. In addition, the failure-rate reduction obtained by PPCs is very sensitive to the PPC cache configuration, which provides an opportunity to improve the solution further by correctly parameterizing the PPC configuration. Consequently, we develop design space exploration (DSE) strategies to discover the best PPC configuration. Our DSE technique can reduce the exploration time by more than a factor of six compared to an exhaustive approach.

18.
Static energy reduction techniques for microprocessor caches
Microprocessor performance has been improved by increasing the capacity of on-chip caches. However, the performance gain comes at the price of static energy consumption due to subthreshold leakage current in cache memory arrays. This paper compares three techniques for reducing static energy consumption in on-chip level-1 and level-2 caches. One technique employs low-leakage transistors in the memory cell. Another technique, power supply switching, can be used to turn off memory cells and discard their contents. A third alternative is dynamic threshold modulation, which places memory cells in a standby state that preserves cell contents. In our experiments, we explore the energy and performance tradeoffs of these techniques. We also investigate the sensitivity of microprocessor performance and energy consumption to additional cache latency caused by leakage-reduction techniques.

19.
On-demand routing protocols use route caches to make routing decisions. Due to mobility, cached routes easily become stale. To address the cache-staleness issue, prior work on DSR used heuristics with ad hoc parameters to predict the lifetime of a link or a route. However, heuristics cannot accurately estimate timeouts because topology changes are unpredictable. In this paper, we propose proactively disseminating broken-link information to the nodes that have that link in their caches. We define a new cache structure called a cache table and present a distributed cache-update algorithm. Each node maintains in its cache table the information necessary for cache updates. When a link failure is detected, the algorithm notifies, in a distributed manner, all reachable nodes that have cached the link. The algorithm does not use any ad hoc parameters, thus making route caches fully adaptive to topology changes. We show that the algorithm outperforms DSR with path caches and with Link-MaxLife, an adaptive timeout mechanism for link caches. We conclude that proactive cache updating is key to the adaptation of on-demand routing protocols to mobility.
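A minimal sketch of the distributed-update idea as described in the abstract (the data-structure and field names are mine, and the recursion stands in for message passing): each node remembers which neighbors learned a link from it, so a failure notice reaches exactly the nodes that cached that link.

```python
class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.routes = set()   # cached routes, each a tuple of links
        self.taught = {}      # link -> neighbors that learned it from us

def notify_link_failure(node, link, network, seen=None):
    """Purge `link` everywhere it was cached, following the cache tables."""
    seen = seen or set()
    if node.node_id in seen:
        return
    seen.add(node.node_id)
    node.routes = {r for r in node.routes if link not in r}  # drop stale routes
    for nbr in node.taught.pop(link, set()):                 # forward the notice
        notify_link_failure(network[nbr], link, network, seen)

# Toy example: A taught B about link ("x", "y"); B taught C.
net = {i: Node(i) for i in "ABC"}
for n in net.values():
    n.routes = {(("x", "y"),), (("u", "v"),)}
net["A"].taught[("x", "y")] = {"B"}
net["B"].taught[("x", "y")] = {"C"}
notify_link_failure(net["A"], ("x", "y"), net)
print(net["C"].routes)   # only the route without the broken link remains
```

No timeout parameter appears anywhere: caches react to actual topology changes, which is the adaptivity the paper argues for.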

20.
Supporting cooperative caching in ad hoc networks
Most research in ad hoc networks focuses on routing, and not much work has been done on data access. A common technique used to improve the performance of data access is caching. Cooperative caching, which allows the sharing and coordination of cached data among multiple nodes, can further exploit the potential of caching techniques. Due to the mobility and resource constraints of ad hoc networks, cooperative caching techniques designed for wired networks may not be applicable to ad hoc networks. In this paper, we design and evaluate cooperative caching techniques to efficiently support data access in ad hoc networks. We first propose two schemes: CacheData, which caches the data, and CachePath, which caches the data path. After analyzing the performance of these two schemes, we propose a hybrid approach (HybridCache), which can further improve performance by taking advantage of CacheData and CachePath while avoiding their weaknesses. Cache replacement policies are also studied to further improve performance. Simulation results show that the proposed schemes can significantly reduce the query delay and message complexity when compared to other caching schemes.
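A hedged sketch of the hybrid decision rule suggested by the abstract (the threshold parameters and names are my illustration; the paper's HybridCache also weighs other factors such as data freshness):

```python
def hybrid_caching_decision(data_size, size_threshold, saved_hops, hop_threshold):
    """Decide what a node caches when a data item passes by.

    CacheData : store the item itself -- worthwhile for small items.
    CachePath : store only the path to a nearby cached copy -- worthwhile
                for large items when the shortcut saves enough hops.
    """
    if data_size <= size_threshold:
        return "CacheData"
    if saved_hops >= hop_threshold:
        return "CachePath"
    return "NoCache"

print(hybrid_caching_decision(2, size_threshold=4, saved_hops=1, hop_threshold=3))
# -> CacheData (small item: cheap to store, saves the whole fetch)
print(hybrid_caching_decision(9, size_threshold=4, saved_hops=5, hop_threshold=3))
# -> CachePath (large item: store a pointer to a closer copy instead)
```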
