Similar Documents
20 similar documents found (search time: 24 ms)
1.
Typical network file system (NFS) clients write lazily: they leave dirty pages in the page cache and defer writing to the server. This reduces network traffic when applications repeatedly modify the same set of pages. However, this approach can lead to memory pressure, when the number of available pages on the client system is so low that the system must work harder to reclaim dirty pages. We show that NFS performance is poor under memory pressure and present two mechanisms to address it: eager writeback and eager page laundering. These mechanisms change the client's data management policy from lazy to eager, in which dirty pages are written back proactively, resulting in higher throughput for sequential writes. In addition, we show that NFS servers suffer from out-of-order file operations, which further reduce performance. We introduce request ordering, a server mechanism that processes operations, as much as possible, in the order they were sent by the client, which improves read performance substantially. We have implemented these techniques in the Linux operating system. I/O performance is improved, with the most pronounced improvement visible for sequential access to large files. We see a 33% improvement in the performance of streaming write workloads and more than a threefold improvement for streaming read workloads. We evaluate several non-sequential workloads and show that these techniques do not degrade performance, and can sometimes improve it.
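A minimal sketch of the eager-writeback idea, in Python rather than in the kernel: dirty pages are flushed in a batch once a dirty-page threshold is crossed, instead of lingering until memory pressure forces reclamation. The threshold value, the batching, and the `flush_fn` callback are assumptions made for the sketch, not the paper's Linux implementation.

```python
# Hypothetical sketch of eager writeback: dirty pages are flushed as soon as
# a (made-up) threshold is crossed, rather than lazily under memory pressure.
class EagerWritebackCache:
    def __init__(self, flush_fn, eager_threshold=64):
        self.dirty = {}              # page_id -> data
        self.flush_fn = flush_fn     # writes a batch of pages to the NFS server
        self.eager_threshold = eager_threshold

    def write(self, page_id, data):
        self.dirty[page_id] = data   # absorb repeated writes to the same page
        if len(self.dirty) >= self.eager_threshold:
            self.flush()             # eager: write back before memory pressure builds

    def flush(self):
        if self.dirty:
            # Send pages in page-id order, in the spirit of the paper's
            # request-ordering mechanism on the server side.
            self.flush_fn(sorted(self.dirty.items()))
            self.dirty.clear()

if __name__ == "__main__":
    sent = []
    cache = EagerWritebackCache(sent.extend, eager_threshold=4)
    for i in range(10):
        cache.write(i % 5, b"x")     # repeated writes to pages 0..4
    cache.flush()
    print(len(sent), "pages written")
```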

2.
A Centrally Managed Web Caching System and Its Performance Analysis (cited: 5; self-citations: 0; citations by others: 5)
Sharing cached files is an important way to reduce network traffic and server load. Building on an overview of Web caching technology and the popular inter-cache communication protocol ICP, this paper proposes a centrally managed Web caching system. The system dispatches each user HTTP request, according to a fixed algorithm, to an appropriate cache server, which eliminates the heavy inter-server communication and cache-processing overhead within the caching system and reduces the redundancy of cached content. Analysis shows that the centrally managed Web caching system offers higher cache efficiency, lower processing overhead, and smaller delays than a simple ICP-based caching system, and that it scales well.
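The dispatch step lends itself to a short sketch: hashing each URL to exactly one cache server means the caches never need to query one another (as ICP peers do) and no object is stored twice. This is a generic hash-dispatch sketch with invented server names, not the paper's specific algorithm.

```python
import hashlib

# Each URL deterministically maps to one cache server, so every request for
# the same object lands on the same cache. Server names are illustrative.
SERVERS = ["cache-a", "cache-b", "cache-c"]

def pick_server(url: str) -> str:
    digest = hashlib.md5(url.encode()).digest()
    return SERVERS[int.from_bytes(digest[:4], "big") % len(SERVERS)]

for url in ["http://example.com/a.html", "http://example.com/b.png"]:
    print(url, "->", pick_server(url))
```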

3.
Web object caching is an important technique for reducing Web server traffic and access latency. Although introducing Web caches greatly lightens server load, relieves network congestion, and shortens client access delays, it also introduces a cache consistency problem: the data a client obtains from the Web may not be the latest version. By analyzing existing cache consistency approaches, this paper proposes a strong cache consistency algorithm suited to the Web.

4.
Network cache management is a technique for reducing Internet traffic and improving end-user response time. It draws on other areas of computing and networking: today's popular Intel-architecture CPUs contain caches that speed up memory access, operating systems use caches to accelerate disk access, and distributed file systems typically use caches to speed up communication between clients and servers. Wireless network data can be cached either at the client or in the network. On this basis, this paper presents a preliminary study of cache management for periodic network data transmission.

5.
Applying Data Mining Techniques to Web Prefetching (cited: 69; self-citations: 0; citations by others: 69)
The WWW is popular for its multimedia delivery and good interactivity. Although network speeds have improved greatly in recent years, the rapid growth in the number of Internet users, together with the inherent latency of Web services and the network itself, has made the network increasingly congested, so users' quality of service cannot be well guaranteed. This paper therefore proposes an intelligent Web prefetching technique that speeds up page retrieval while users browse the Web. The technique represents the data in the user's browser cache with a simplified WWW data model and applies data mining to extract the user's interest association rules, which are stored in an interest-association knowledge base and serve as the basis for predicting user behavior. On the client side, an intelligent agent mines the user's interests and performs Web prefetching based on the knowledge base, accelerating the browser transparently to the user.
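A toy sketch of the interest-association step, under invented session data and thresholds: page pairs that co-occur in browsing sessions often enough become "A implies B" rules, and the agent prefetches the consequents of whatever page the user is viewing. The paper's actual data model and mining algorithm are richer than this.

```python
from collections import Counter
from itertools import combinations

# Count page pairs that co-occur in sessions and turn frequent, confident
# pairs into prefetch rules. Sessions and thresholds are illustrative.
sessions = [["index", "news", "sports"],
            ["index", "news"],
            ["index", "mail"],
            ["news", "sports"]]

pair_counts = Counter()
page_counts = Counter()
for s in sessions:
    page_counts.update(set(s))
    pair_counts.update(combinations(sorted(set(s)), 2))

MIN_SUPPORT, MIN_CONFIDENCE = 2, 0.6
rules = {}
for (a, b), n in pair_counts.items():
    if n >= MIN_SUPPORT:
        if n / page_counts[a] >= MIN_CONFIDENCE:
            rules.setdefault(a, []).append(b)
        if n / page_counts[b] >= MIN_CONFIDENCE:
            rules.setdefault(b, []).append(a)

def prefetch(current_page):
    return rules.get(current_page, [])   # pages the agent would fetch ahead of time

print(prefetch("news"))   # ['index', 'sports'] with this toy data
```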

6.
High-performance Web sites rely on Web server "farms", hundreds of computers serving the same content, for scalability, reliability, and low-latency access to Internet content. Deploying these scalable farms typically requires the power of distributed or clustered file systems. Building Web server farms on file systems complements hierarchical proxy caching. Proxy caching replicates Web content throughout the Internet, thereby reducing latency from network delays and off-loading traffic from the primary servers. Web server farms scale resources at a single site, reducing latency from queuing delays. Both technologies are essential when building a high-performance infrastructure for content delivery. The authors present a cache consistency model and locking protocol customized for file systems that are used as scalable infrastructure for Web server farms. The protocol takes advantage of the Web's relaxed consistency semantics to reduce latencies and network overhead. Our hybrid approach preserves strong consistency for concurrent write sharing, with time-based consistency and push caching for readers (Web servers). Using simulation, we compare our approach to the Andrew file system and to the sequential-consistency file system protocols that we propose to replace.

7.
In a GPU, all threads within a warp execute the same instruction in lockstep. Memory requests from some threads may be serviced quickly while the rest take much longer; the warp cannot issue its next instruction until the slowest request completes, causing memory divergence. This paper studies inter-warp heterogeneity in GPUs and implements and optimizes a cache management mechanism and memory scheduling policy based on it, reducing the negative effects of memory divergence and cache queuing delay. Warps are classified by cache hit rate to drive three components: (1) a warp-type-based cache bypassing component that lets warps with low cache utilization bypass the L2 cache; (2) a warp-type-based cache insertion/promotion policy that prevents data from high-cache-utilization warps from being evicted prematurely; and (3) a warp-type-based memory controller that prioritizes requests received from high-cache-utilization warps and requests from the same warp. Across eight GPGPU applications, this inter-warp-heterogeneity-based cache management and memory scheduling mechanism achieves an average speedup of 18.0% over the baseline GPU.
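A simplified sketch of component (1), the warp-type-based bypass: each warp's observed L2 hit rate classifies it, and low-utilization warps are routed straight to DRAM so they do not thrash the cache. The warm-up window, the threshold, and the dictionary standing in for the L2 are assumptions made for the sketch, not details from the paper.

```python
HIT_RATE_THRESHOLD = 0.3   # illustrative classification threshold

class WarpStats:
    def __init__(self):
        self.hits = 0
        self.accesses = 0

    def hit_rate(self):
        return self.hits / self.accesses

def route_request(warp, addr, l2_cache):
    # Classify only after a warm-up window so cold warps are not misjudged.
    bypass = warp.accesses >= 8 and warp.hit_rate() < HIT_RATE_THRESHOLD
    warp.accesses += 1
    if bypass:
        return "dram"                      # low-utilization warp: skip the L2
    if addr in l2_cache:
        warp.hits += 1
        return "l2-hit"
    l2_cache[addr] = True                  # fill on miss for cache-friendly warps
    return "l2-miss"

l2, w = {}, WarpStats()
# A warp reusing 4 addresses stays in the L2; a streaming warp with all-unique
# addresses would be classified as low-utilization and bypassed after warm-up.
print([route_request(w, a % 4, l2) for a in range(12)])
```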

8.
Exploiting Regularities in Web Traffic Patterns for Cache Replacement (cited: 2; self-citations: 0; citations by others: 2)
Cohen & Kaplan. Algorithmica, 2002, 33(3): 300-334
Abstract. Caching web pages at proxies and in web servers' memories can greatly enhance performance. Proxy caching is known to reduce network load and both proxy and server caching can significantly decrease latency. Web caching problems have different properties than traditional operating systems caching, and cache replacement can benefit by recognizing and exploiting these differences. We address two aspects of the predictability of traffic patterns: the overall load experienced by large proxy and web servers, and the distinct access patterns of individual pages. We formalize the notion of "cache load" under various replacement policies, including LRU and LFU, and demonstrate that the trace of a large proxy server exhibits regular load. Predictable load allows for improved design, analysis, and experimental evaluation of replacement policies. We provide a simple and (near-)optimal replacement policy when each page request has an associated distribution function on the next request time of the page. Without the predictable load assumption, no such online policy is possible and it is known that even obtaining an offline optimum is hard. For experiments, predictable load enables comparing and evaluating cache replacement policies using partial traces, containing requests made to only a subset of the pages. Our results are based on considering a simpler caching model which we call the interval caching model. We relate traditional and interval caching policies under predictable load, and derive (near-)optimal replacement policies from their optimal interval caching counterparts.

9.
With the exponential growth of WWW traffic, web proxy caching has become a critical technique for Internet web services. Well-organized proxy caching systems with multiple servers can greatly reduce user-perceived latency and decrease network bandwidth consumption. Thus, many research papers have focused on improving web caching performance with efficient coordination algorithms among multiple servers. Hash-based algorithms are the most widely used server coordination mechanism; however, a number of technical issues still need to be addressed. In this paper, we propose a new hash-based web caching architecture, Tulip. Tulip aggregates web objects that are likely to be accessed together into object clusters and uses object clusters as the primary access units. Tulip extends the locality-based algorithm in UCFS to hash-based web proxy systems and proposes a simple algorithm to reduce the data-grouping overhead. It takes into account the speed disparity between memory and disk and replaces many expensive small disk I/Os with fewer large ones. When a client request cannot be served from the server's memory, the system fetches the whole cluster containing the required object into memory, so that future requests for other objects in the same cluster can be satisfied directly from memory and slow disk I/Os are avoided. It also introduces a simple and efficient data duplication algorithm; little maintenance is needed when servers join, leave, or fail. Along with its local caching strategy, Tulip achieves better fault tolerance and load balancing capability at minimal cost. Our simulation results show Tulip performs better than previous approaches.
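A sketch of the cluster-fetch behavior described above, under an invented cluster layout: objects that tend to be accessed together live in one on-disk cluster, and a miss pulls the whole cluster into memory with a single large I/O.

```python
# Objects grouped into clusters; a miss on any member loads the whole cluster,
# trading one large disk read for many small ones. Layout is illustrative.
disk_clusters = {
    "c1": {"/index.html": b"...", "/logo.png": b"...", "/style.css": b"..."},
    "c2": {"/about.html": b"...", "/team.jpg": b"..."},
}
object_to_cluster = {obj: cid for cid, objs in disk_clusters.items() for obj in objs}
memory = {}   # object -> data, populated a cluster at a time

def get(obj):
    if obj not in memory:                       # miss: one large cluster read
        memory.update(disk_clusters[object_to_cluster[obj]])
    return memory[obj]

get("/index.html")                # disk I/O for cluster c1
assert "/style.css" in memory     # later requests in c1 now hit memory
```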

10.
Peer-to-Peer (P2P) file sharing accounts for a very significant part of the Internet’s traffic, affecting the performance of other applications and translating into significant peering costs for ISPs. It has been noticed that, just like WWW traffic, P2P file sharing traffic shows locality properties, which are not exploited by current P2P file sharing protocols. We propose a peer selection algorithm, Adaptive Search Radius (ASR), where peers exploit locality by only downloading from those other peers which are nearest (in network hops). ASR ensures swarm robustness by dynamically adapting the distance according to file part availability. ASR aims at reducing the Internet’s P2P file sharing traffic, while decreasing the download times perceived by users, providing them with an incentive to adopt this algorithm. We believe ASR to be the first locality-aware P2P file sharing system that does not require assistance from ISPs or third parties nor modification to the server infrastructure. We support our proposal with extensive simulation studies, using the eDonkey/eMule protocol on SSFNet. These show a 19 to 29% decrease in download time and a 27 to 70% reduction in the traffic carried by tier-1 ISPs. ASR is also compared (favourably) with Biased Neighbour Selection (BNS) and traffic shaping. We conclude that ASR and BNS are complementary solutions which provide the highest performance when combined. We evaluated the impact of P2P file sharing traffic on HTTP traffic, showing the benefits on HTTP performance of reducing P2P traffic. A plan for introducing ASR into eMule clients is also discussed. This will allow a progressive migration to ASR-enabled versions of eMule client software. ASR was also successfully used to download from live Internet swarms, providing significant traffic savings while finishing downloads faster.
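A sketch of the adaptive-radius idea, with invented peers and hop counts: download only from the nearest peers that hold needed file parts, and widen the radius when too few such peers are in range. The starting radius, doubling rule, and cutoff are assumptions for the sketch, not the eMule integration described in the paper.

```python
# Each peer: (name, network hops away, set of file parts it holds).
peers = [("p1", 2, {0, 1}), ("p2", 3, {1, 2}), ("p3", 8, {2, 3}), ("p4", 12, {0, 3})]

def select_peers(needed_parts, min_peers=2):
    radius = 4                                   # start close to home
    while True:
        near = [(name, hops, parts) for name, hops, parts in peers
                if hops <= radius and parts & needed_parts]
        if len(near) >= min_peers or radius > 64:
            return near                          # enough availability, or stop widening
        radius *= 2                              # adapt: widen the search radius

print(select_peers({2, 3}))   # p2 alone is not enough, so the radius grows to reach p3
```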

11.
Applying Cache Technology on the World Wide Web (cited: 5; self-citations: 0; citations by others: 5)
Although caching is a traditional technique, applying it to the new network environment requires adapting the existing technology. This paper describes in detail the network behavior, in response to user events, of each node in the network (client, proxy, and server), explains why caching is used on the WWW, discusses how to apply the technique at the different network nodes, and compares the major existing Web caching protocols. It also addresses two important problems in this area: cache replacement policies, and how to keep dynamic objects in the cache consistent with the corresponding objects on the origin Web server.

12.
Iyer, Ravi. World Wide Web, 2004, 7(3): 259-280
As Internet usage continues to expand rapidly, careful attention needs to be paid to the design of Internet servers for achieving high performance and end-user satisfaction. Currently, the memory system continues to remain a significant performance bottleneck for Internet servers employing multi-GHz processors. In this paper, our aim is two-fold: (1) to characterize the cache/memory performance of web server workloads and (2) to propose and evaluate cache design alternatives for future web servers. We chose SPECweb99 as the representative web server workload and our entire characterization and evaluation methodology is based on our CASPER simulation framework. We begin by exploring the processor cache design space for single and dual-processor servers. Based on our observations, we then evaluate other cache hierarchy alternatives such as chipset caches, coherence filters and decompressed page stores. We show the sensitivity of these components to basic organization parameters such as cache size, line size and degree of associativity. We also present the performance implications of routing memory requests initiated by I/O devices through these caches. Based on detailed simulation data and its implications on system level performance, this paper shows that chipset caches have significant potential for improving future web server performance.

13.
A number of technology and workload trends motivate us to consider the appropriate resource allocation mechanisms and policies for streaming media services in shared cluster environments. We present MediaGuard – a model-based infrastructure for building streaming media services – that can efficiently determine the fraction of server resources required to support a particular client request over its expected lifetime. The proposed solution is based on a unified cost function that uses a single value to reflect overall resource requirements such as the CPU, disk, memory, and bandwidth necessary to support a particular media stream, based on its bit rate and on whether it is likely to be served from memory or disk. We design a novel, time-segment-based memory model of a media server to efficiently determine in linear time whether a request will incur a memory or a disk access, given the history of previous accesses and the behavior of the server's main memory file buffer cache. Using the MediaGuard framework, we design two media services: (1) an efficient and accurate admission control service for streaming media servers that accounts for the impact of the server's main memory file buffer cache, and (2) a shared streaming media hosting service that can efficiently allocate predefined shares of server resources to the hosted media services, while providing performance isolation and QoS guarantees among the hosted services. Our evaluation shows that, relative to a pessimistic admission control policy that assumes that all content must be served from disk, MediaGuard (as well as services built using it) delivers a factor-of-two improvement in server throughput.
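A sketch of a unified-cost admission test in the spirit of the description above: each stream's resource demand collapses to a single number that depends on its bit rate and on whether it will be served from memory or disk. The cost weights and capacity are invented for the sketch; the paper's cost function and memory model are more detailed.

```python
# Illustrative cost weights: a disk-served stream is assumed ~3x as expensive
# as a memory-served one of the same bit rate.
MEMORY_COST_PER_KBPS = 1.0
DISK_COST_PER_KBPS = 3.0
CAPACITY = 10_000.0             # abstract capacity units for the whole server

active_cost = 0.0

def stream_cost(bitrate_kbps, served_from_memory):
    w = MEMORY_COST_PER_KBPS if served_from_memory else DISK_COST_PER_KBPS
    return w * bitrate_kbps

def admit(bitrate_kbps, served_from_memory):
    global active_cost
    c = stream_cost(bitrate_kbps, served_from_memory)
    if active_cost + c > CAPACITY:
        return False                 # would overload the server: reject
    active_cost += c
    return True

print(admit(500, True), admit(2000, False), admit(2000, False))   # True True False
```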

14.
As the gap between storage access speed and processor speed grows ever wider, memory access performance has become the bottleneck for improving processor performance. By analyzing the memory access behavior of programs, this paper proposes an adaptive stack cache scheme with fast address calculation. The scheme separates stack accesses from data cache accesses and exploits the characteristics of stack data accesses to raise instruction-level parallelism, reduce data cache pollution, and lower the data cache miss rate; a fast address calculation strategy shortens the hit time of stack accesses. The stack cache shuts itself off adaptively on stack overflow, avoiding the performance penalty of stack switching. A process identifier is added to the stack cache tags, so data need not be written back to lower levels of the storage hierarchy on a process switch, making the scheme suitable for multiprocess environments. Results for the SPEC CPU2000 programs show that with the fast-address-calculation adaptive stack cache, 25.8% of memory access instructions can execute in parallel, the data cache miss rate drops by 9.4% on average, and IPC improves by 6.9% on average.

15.
A Data Cube Model for Prediction-Based Web Prefetching (cited: 7; self-citations: 0; citations by others: 7)
Reducing web latency is one of the primary concerns of Internet research. Web caching and web prefetching are two effective techniques for latency reduction. A primary method for intelligent prefetching is to rank potential web documents based on prediction models that are trained on past web server and proxy server log data, and to prefetch the highly ranked objects. For this method to work well, the prediction model must be updated constantly, and different queries must be answered efficiently. In this paper we present a data cube model to represent Web access sessions for data mining, in support of prediction model construction. The cube model organizes session data into three dimensions. With the data cube in place, we apply efficient data mining algorithms for clustering and correlation analysis. As a result of the analysis, the web page clusters can then be used to guide the prefetching system. In this paper, we propose an integrated web-caching and web-prefetching model, where the issues of prefetching aggressiveness, replacement policy, and increased network traffic are addressed together in an integrated framework. The core of our integrated solution is a prediction model based on statistical correlation between web objects. This model can be frequently updated by querying the data cube of web server logs. To our knowledge, this integrated data cube and prediction-based prefetching framework represents a first such effort.

16.
The web is the largest distributed database deploying time-to-live-based weak consistency. Each object has a lifetime duration assigned to it by its origin server. A copy of the object fetched from its origin server is received with a maximum time-to-live (TTL) that equals its lifetime duration. In contrast, a copy obtained through a cache has a shorter TTL, since the age (the time elapsed since it was fetched from the origin) is deducted from its lifetime duration. A request served by a cache constitutes a hit if the cache has a fresh copy of the object. Otherwise, the request is considered a miss and is propagated to another server. It is evident that the number of cache misses depends on the age of the copies the cache receives. Thus, a cache that sends requests to another cache would suffer more misses than a cache that sends requests directly to an authoritative server. In this paper, we model and analyze the effect of age on the performance of various cache configurations. We consider a low-level cache that fetches objects either from their origin servers or from other caches and analyze its miss rate as a function of its fetching policy. We distinguish between three basic fetching policies, namely fetching always from the origin, fetching always from the same high-level cache, and fetching from a “random” high-level cache. We explore the relationships between these policies in terms of the miss rate achieved by the low-level cache, both on worst-case sequences and on sequences generated using particular probability distributions. Guided by web caching practice, we consider two variations of the basic policies. In the first variation the high-level cache uses pre-term refreshes to keep a copy with lower age. In the second variation the low-level cache uses an extended lifetime duration. We analyze how these variations affect the miss rates. Our theoretical results help to understand how age may affect the miss rate, and imply guidelines for improving the performance of web caches.
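A minimal sketch of the age mechanism at the center of this analysis: a copy that arrives through a cache is already aged, so its remaining TTL is its lifetime duration minus its age.

```python
import time

# A copy's freshness window is lifetime minus the age it arrived with.
class Copy:
    def __init__(self, lifetime, age=0.0):
        self.expires = time.time() + lifetime - age   # age shortens the TTL

    def fresh(self):
        return time.time() < self.expires

origin_copy = Copy(lifetime=60)          # full TTL from the origin server
cached_copy = Copy(lifetime=60, age=45)  # via a high-level cache: only 15s left
print(origin_copy.fresh(), cached_copy.fresh())
```

This is why the fetching policy matters: a low-level cache that always fetches through the same high-level cache keeps receiving aged copies, and those copies expire sooner, raising its miss rate.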

17.
In this paper, we propose and analyze a proxy-based hybrid cache management scheme for client-server applications in Mobile IP (MIP) networks. We leverage a per-user proxy as a gateway between the server and the mobile host (MH) such that any communication between the MH and server must pass through the proxy. The proxy has dual responsibilities in our design. It keeps track of the current location of the MH by acting as a regional Gateway Foreign Agent (GFA) as in the MIP Regional Registration protocol for mobility management. The proxy is also responsible for cache consistency management and query processing on behalf of the MH. To reduce the network traffic, a threshold-based hybrid cache consistency management policy is applied. That is, when a data object is updated at the server, the server sends an invalidation report to the MH through the proxy to invalidate the cached data object, provided that the size of the data object exceeds the given threshold. Otherwise, the server sends a fresh copy of the data object through the proxy to the MH. We identify the best “threshold” value that would minimize the overall network traffic incurred due to mobility management, cache consistency management, and query processing, when given a set of parameter values characterizing the operational and workload conditions of the MIP network.
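A sketch of the threshold-based hybrid policy on the server side: an update to a large object triggers an invalidation report, while a small object is pushed in full. The byte threshold here is an arbitrary placeholder; finding the traffic-minimizing value is exactly what the paper analyzes.

```python
# On update: invalidate large objects, push fresh copies of small ones.
THRESHOLD_BYTES = 4096   # placeholder; the paper derives the optimal value

def on_update(obj_id, new_data, send_to_proxy):
    if len(new_data) > THRESHOLD_BYTES:
        send_to_proxy({"type": "invalidate", "obj": obj_id})
    else:
        send_to_proxy({"type": "refresh", "obj": obj_id, "data": new_data})

msgs = []
on_update("profile", b"x" * 100, msgs.append)     # small: push fresh copy
on_update("video", b"x" * 100_000, msgs.append)   # large: invalidate only
print([m["type"] for m in msgs])                  # ['refresh', 'invalidate']
```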

18.
Proxy caches are essential to improve the performance of the World Wide Web and to reduce user-perceived latency. Appropriate cache management strategies are crucial to achieving these goals. In our previous work, we introduced Web object-based caching policies. A Web object consists of the main HTML page and all of its constituent embedded files. Our studies have shown that these policies improve proxy cache performance substantially. In this paper, we propose a new Web object-based policy to manage the storage system of a proxy cache. We propose two techniques to improve storage system performance. The first technique prefetches the related files belonging to a Web object from the disk into main memory. This prefetching improves performance because most of the files can then be served from main memory rather than from the proxy disk. The second technique stores the members of a Web object in contiguous disk blocks in order to reduce disk access time. We used trace-driven simulations to study the performance improvements these two techniques can provide. Our results show that the first technique by itself provides up to a 50% reduction in hit latency, the delay involved in serving a hit document from the proxy. An additional 5% improvement can be obtained by incorporating the second technique.
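A sketch of the first technique: when the main HTML page of a Web object is requested, its embedded files are read from disk into memory ahead of the client's follow-up requests. The object table and the disk-read stand-in are illustrative.

```python
# Web object = main page plus embedded files; serving the main page prefetches
# the rest of the object from disk into the in-memory cache.
web_objects = {"/index.html": ["/banner.gif", "/site.css", "/menu.js"]}
memory_cache = {}

def read_from_disk(path):
    return b"<data for %s>" % path.encode()   # stand-in for a real disk read

def serve(path):
    if path not in memory_cache:
        memory_cache[path] = read_from_disk(path)
        # Prefetch the rest of the Web object so embedded-file hits are
        # served from memory rather than from the proxy disk.
        for member in web_objects.get(path, []):
            memory_cache.setdefault(member, read_from_disk(member))
    return memory_cache[path]

serve("/index.html")
assert "/site.css" in memory_cache   # embedded file already in memory
```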

19.
Proxy caching is an effective approach to reduce the response latency of client requests, web server load, and network traffic. Recently there has been a major shift in the usage of the Web. Emerging web applications require an increasing amount of server-side processing. Current proxy protocols do not support caching and execution of web processing units. In this paper, we present a weblet environment, in which processing units on web servers are implemented as weblets. These weblets can migrate from web servers to proxy servers to perform required computation and provide faster responses. A weblet engine provides the execution environment on proxy servers as well as web servers, to facilitate uniform weblet execution. We have conducted thorough experimental studies to investigate the performance of the weblet approach. We modify the industry-standard e-commerce benchmark TPC-W to fit the weblet model and use its workload model for performance comparisons. The experimental results show that the weblet environment significantly improves system performance in terms of client response latency, web server throughput, and workload. Our prototype weblet system also demonstrates the feasibility of integrating the weblet environment with the current web/proxy infrastructure.

20.
Adding Web-Tier Static File Caching to an Application Server (cited: 1; self-citations: 0; citations by others: 1)
To improve access efficiency, a region of memory is set aside to hold static files that have been accessed, so that the next request for the same file is answered from memory instead of being read from disk. This paper implements a static file caching facility based on the LRU eviction algorithm. The facility is not tied to any particular server and is highly portable.
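A minimal sketch of the described facility, assuming an invented capacity knob: an in-memory cache of static files in front of disk reads, with least-recently-used eviction.

```python
from collections import OrderedDict

# In-memory LRU cache of static files: hits come from memory, misses read
# the file from disk once, and the least recently used file is evicted.
class StaticFileCache:
    def __init__(self, capacity=128):
        self.capacity = capacity
        self.files = OrderedDict()           # path -> bytes, in LRU order

    def get(self, path):
        if path in self.files:
            self.files.move_to_end(path)     # mark as recently used
            return self.files[path]
        with open(path, "rb") as f:          # miss: read from disk once
            data = f.read()
        self.files[path] = data
        if len(self.files) > self.capacity:
            self.files.popitem(last=False)   # evict least recently used
        return data

# Usage: cache = StaticFileCache(); cache.get("/var/www/index.html")
```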
