Similar Documents
20 similar documents found.
1.
High-performance Web sites rely on Web server 'farms' (hundreds of computers serving the same content) for scalability, reliability, and low-latency access to Internet content. Deploying these scalable farms typically requires the power of distributed or clustered file systems. Building Web server farms on file systems complements hierarchical proxy caching: proxy caching replicates Web content throughout the Internet, thereby reducing latency from network delays and off-loading traffic from the primary servers, while Web server farms scale resources at a single site, reducing latency from queuing delays. Both technologies are essential when building a high-performance infrastructure for content delivery. The authors present a cache consistency model and locking protocol customized for file systems that are used as scalable infrastructure for Web server farms. The protocol takes advantage of the Web's relaxed consistency semantics to reduce latencies and network overhead. Our hybrid approach preserves strong consistency for concurrent write sharing, while using time-based consistency and push caching for readers (the Web servers). Using simulation, we compare our approach with the Andrew file system and the sequential-consistency file system protocols we propose to replace.
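The hybrid policy described above can be illustrated with a small sketch (a toy, not the authors' protocol): readers accept cached file data younger than a validity window, while writers serialize through a lock. The window length, the fetch_fn callback, and the single global lock are all illustrative assumptions.

```python
import time
import threading

class HybridConsistencyCache:
    # Toy model: readers accept cached data younger than a validity window
    # (time-based consistency); writers serialize through a lock (strong
    # consistency for concurrent write sharing). Push updates are omitted.
    def __init__(self, fetch_fn, validity_window=30.0):
        self.fetch_fn = fetch_fn            # callback that reads from the file server
        self.validity_window = validity_window
        self.entries = {}                   # path -> (data, fetch_time)
        self.write_lock = threading.Lock()  # stand-in for a per-file lock

    def read(self, path):
        entry = self.entries.get(path)
        now = time.time()
        if entry and now - entry[1] < self.validity_window:
            return entry[0]                 # fresh enough for Web readers
        data = self.fetch_fn(path)          # revalidate with the server
        self.entries[path] = (data, now)
        return data

    def write(self, path, data):
        with self.write_lock:               # concurrent writers serialize
            self.entries[path] = (data, time.time())

cache = HybridConsistencyCache(fetch_fn=lambda p: f"contents of {p}")
print(cache.read("/htdocs/index.html"))
```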

2.
A Centrally Managed Cooperative Web Caching System
Sharing Web documents cached at different proxies is an important way to reduce traffic and relieve network bottlenecks. Based on an analysis of the existing Internet Cache Protocol (ICP), this paper proposes a new cooperative Web caching system (CMCS) and compares it with earlier schemes. By spreading HTTP requests evenly across the proxies in the system, it eliminates the heavy inter-proxy communication overhead and the processing burden it causes. In a dynamically changing network environment, it organizes the proxies effectively to handle documents coming from servers. It also avoids the situation in which every proxy holds a large amount of redundant content and the proxies' contents converge toward one another.
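A minimal sketch of the request-spreading idea described above, assuming a simple hash of the URL picks the single home proxy so no inter-proxy ICP queries are needed; the hash function and proxy list are illustrative, not the paper's exact CMCS mapping.

```python
import hashlib

def assign_proxy(url, proxies):
    # Hash-partition the URL space so every document has exactly one home
    # proxy; requests go straight there and no inter-proxy ICP queries occur.
    digest = hashlib.sha1(url.encode("utf-8")).digest()
    return proxies[int.from_bytes(digest[:4], "big") % len(proxies)]

proxies = ["proxy-a:3128", "proxy-b:3128", "proxy-c:3128"]
print(assign_proxy("http://example.com/index.html", proxies))
```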

3.
Replication of information across multiple servers is becoming a common approach to support popular Web sites. A distributed architecture with some mechanisms to assign client requests to Web servers is more scalable than any centralized or mirrored architecture. In this paper, we consider distributed systems in which the Authoritative Domain Name Server (ADNS) of the Web site takes the request dispatcher role by mapping the URL hostname into the IP address of a visible node, that is, a Web server or a Web cluster interface. This architecture can support local and geographical distribution of the Web servers. However, the ADNS controls only a very small fraction of the requests reaching the Web site because the address mapping is not requested for each client access. Indeed, to reduce Internet traffic, address resolution is cached at various name servers for a time-to-live (TTL) period. This opens an entirely new set of problems that traditional centralized schedulers of parallel/distributed systems do not have to face. The heterogeneity assumption on Web node capacity, which is much more likely in practice, increases the order of complexity of the request assignment problem and severely affects the applicability and performance of the existing load sharing algorithms. We propose new assignment strategies, namely adaptive TTL schemes, which tailor the TTL value for each address mapping instead of using a fixed value for all mapping requests. The adaptive TTL schemes are able to address both the nonuniformity of client requests and the heterogeneous capacity of Web server nodes. Extensive simulations show that the proposed algorithms are very effective in avoiding node overload, even for high levels of heterogeneity and limited ADNS control.
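A rough sketch of an adaptive-TTL rule in the spirit of the schemes described above: the TTL handed out with each address mapping grows with the chosen server's relative capacity and shrinks with the requesting domain's request rate. The scaling formula and constants are assumptions for illustration, not the paper's algorithm.

```python
def adaptive_ttl(base_ttl, server_capacity, max_capacity, domain_request_rate):
    # Longer TTLs when the chosen server is relatively powerful and the
    # requesting client domain generates few requests; shorter otherwise.
    capacity_factor = server_capacity / max_capacity        # 0..1
    demand_factor = 1.0 / (1.0 + domain_request_rate)       # busy domains get shorter TTLs
    ttl = base_ttl * capacity_factor * demand_factor
    return max(1, int(ttl))

# Example: a server at half the top capacity, queried by a busy client domain
print(adaptive_ttl(base_ttl=240, server_capacity=50, max_capacity=100,
                   domain_request_rate=3.0))   # -> 30 seconds
```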

4.
Integrating Web Prefetching and Caching Using Prediction Models
Yang Qiang, Zhang Henry Hanning. World Wide Web, 2001, 4(4): 299-321
Web caching and prefetching have been studied in the past separately. In this paper, we present an integrated architecture for Web object caching and prefetching. Our goal is to design a prefetching system that can work with an existing Web caching system in a seamless manner. In this integrated architecture, a certain amount of caching space is reserved for prefetching. To empower the prefetching engine, a Web-object prediction model is built by mining the frequent paths from past Web log data. We show that the integrated architecture improves the performance over Web caching alone, and present our analysis on the tradeoff between the reduced latency and the potential increase in network load.
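A small sketch of the prediction side of such an integrated design, assuming a simple frequency count of successor objects mined from logged sessions drives prefetching into the reserved cache space; the min_support threshold and data structures are illustrative.

```python
from collections import defaultdict, Counter

class PathPredictor:
    # Mines "object -> next object" frequencies from logged sessions and
    # predicts the most likely successor, which a prefetcher would load
    # into the cache space reserved for prefetching.
    def __init__(self, min_support=2):
        self.successors = defaultdict(Counter)   # object -> Counter of next objects
        self.min_support = min_support

    def train(self, sessions):
        for path in sessions:                    # each session is an ordered URL list
            for current, nxt in zip(path, path[1:]):
                self.successors[current][nxt] += 1

    def predict(self, current):
        candidates = self.successors.get(current)
        if not candidates:
            return None
        obj, count = candidates.most_common(1)[0]
        return obj if count >= self.min_support else None

predictor = PathPredictor()
predictor.train([["/a", "/b", "/c"], ["/a", "/b", "/d"], ["/a", "/b", "/c"]])
print(predictor.predict("/a"))   # "/b" would be prefetched into the reserved space
```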

5.
Belloum A., Hertzberger L.O., Muller H. World Wide Web, 2001, 4(4): 255-275
Web caches are traditionally organised in a simple tree-like hierarchy. In this paper, a new architecture is proposed in which federations of caches are distributed globally, each caching data partially. The advantages of the proposed system are that contention on global caches is reduced while scalability improves, since extra cache resources can be added on the fly. Other topics discussed in this paper include the scalability of the proposed system, the algorithms used to control the federations of Web caches, and the approach used to identify potential Web cache partners. To obtain a successful collaborative Web caching system, the formation of federations must be controlled by an algorithm that takes the dynamics of Internet traffic into consideration. We use the history of Web cache accesses to determine how federations should be formed. Initial performance results from a simulation of a number of nodes are promising.

6.
With ever-growing web traffic, cluster-based web servers have become very important to the Internet infrastructure. Thus, making the best use of all available resources in the cluster to achieve high performance is a significant research issue. In this paper, we present Weblins, a cluster-based web server that achieves good throughput. Weblins uses the Gobelins operating system as its platform; Gobelins is an efficient single-system-image operating system that transparently makes use of the resources available in the cluster. The architecture of Weblins is fully distributed. Weblins implements a content-aware request distribution policy via a new interface on top of Gobelins. Popular web files are dynamically replicated on all nodes by a cooperative caching mechanism, while requests for non-popular files are handed off to the corresponding nodes via the TCP Handoff protocol. Simulation results show that the strategy used by Weblins is better suited to cluster-based web servers than either a pure content-aware strategy or a pure cooperative caching strategy.
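The dispatch rule described above might look roughly like the following sketch, assuming popular files are already replicated everywhere by cooperative caching and a hash chooses the home node for the rest; the threshold and hash are illustrative, not Weblins' actual policy.

```python
import hashlib

def route_request(url, local_node, nodes, popularity, hot_threshold=100):
    # Popular files are assumed to be replicated everywhere by cooperative
    # caching and served locally; unpopular files are handed off to a home
    # node chosen by hashing (standing in for the TCP Handoff step).
    if popularity.get(url, 0) >= hot_threshold:
        return local_node
    digest = hashlib.sha1(url.encode("utf-8")).digest()
    return nodes[digest[0] % len(nodes)]

nodes = ["node0", "node1", "node2"]
popularity = {"/index.html": 5000, "/archive/2001/old.html": 3}
print(route_request("/index.html", "node1", nodes, popularity))             # served locally
print(route_request("/archive/2001/old.html", "node1", nodes, popularity))  # handed off
```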

7.
Proxy caching is an effective approach for reducing response latency to client requests, web server load, and network traffic. Recently there has been a major shift in the usage of the Web: emerging web applications require increasing amounts of server-side processing, and current proxy protocols do not support caching and execution of web processing units. In this paper, we present a weblet environment in which processing units on web servers are implemented as weblets. These weblets can migrate from web servers to proxy servers to perform the required computation and provide faster responses. A weblet engine is developed to provide the execution environment on both proxy servers and web servers, facilitating uniform weblet execution. We have conducted thorough experimental studies of the performance of the weblet approach. We modify the industry-standard e-commerce benchmark TPC-W to fit the weblet model and use its workload model for performance comparisons. The experimental results show that the weblet environment significantly improves system performance in terms of client response latency, web server throughput, and workload. Our prototype weblet system also demonstrates the feasibility of integrating the weblet environment with the current web/proxy infrastructure.

8.
A Decentralized, Cooperative Web Cache Cluster Architecture
Web object caching is an important means of reducing Web traffic and access latency. After analyzing existing Web caching systems, this paper proposes a Web cache cluster architecture based on decentralized cooperation. The architecture overcomes the drawback of centralized systems that require an extra management server, eliminates the risk that a failed management-server bottleneck brings the whole system down, and reduces the latency introduced by the management server. At the same time, it eliminates the multi-hop forwarding delay on cache misses and the content overlap found in decentralized systems, improving resource utilization and system efficiency with good scalability and robustness.

9.
周刚, 周建国, 晏蒲柳. 计算机应用 (Journal of Computer Applications), 2006, 26(3): 733-735
This paper proposes a new cooperative caching system based on continuous (consistent-style) hash functions. To address the high latency caused by multi-hop forwarding and repeated hash computation in traditional cooperative caching systems, it designs an efficient Web object location and routing scheme that guarantees any Web request reaches its target node with only one hash computation and at most one forward. An invalidation-triggered strategy is adopted to maintain routing-table consistency, reducing network overhead and improving the system's scalability and reliability. Simulation experiments show that the system outperforms systems based on the Internet Cache Protocol (ICP) and the Cache Array Routing Protocol (CARP).
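Reading the hashing scheme above as a consistent-hashing-style mapping, a toy ring like the following locates any object's home node with one hash and at most one forward; the virtual-node count and MD5 hashing are illustrative assumptions.

```python
import bisect
import hashlib

class ConsistentHashRing:
    # A URL is hashed once and routed to the first node clockwise on the
    # ring, so any proxy can locate the home node with a single hash and
    # forward the request at most once.
    def __init__(self, nodes, vnodes=64):
        self.ring = []                       # sorted list of (hash, node)
        for node in nodes:
            for i in range(vnodes):
                self.ring.append((self._hash(f"{node}#{i}"), node))
        self.ring.sort()
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _hash(key):
        return int.from_bytes(hashlib.md5(key.encode()).digest()[:8], "big")

    def lookup(self, url):
        h = self._hash(url)
        idx = bisect.bisect_right(self.keys, h) % len(self.ring)
        return self.ring[idx][1]

ring = ConsistentHashRing(["cache1", "cache2", "cache3"])
print(ring.lookup("http://example.com/logo.png"))
```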

10.
Because of the rapid growth of the World Wide Web and the popularization of smartphones, tablets, and personal computers, the number of web service users is increasing rapidly. As a result, large web services require additional disk space, and the required disk space grows with the number of users. It is therefore important to design and implement a powerful network file system for large web service providers. In this paper, we address three design issues for scalable network file systems. We use a variable number of objects within a bucket to decrease internal fragmentation for small files. We also propose a free-space and access load-balancing mechanism to balance the overall load on the bucket servers. Finally, we propose a mechanism for caching frequently accessed data to lower total disk I/O. These mechanisms can effectively improve scalable network file system performance for large web services.

11.
12.
Web caching: A way to improve web QoS
As the Internet and World Wide Web grow at a fast pace, it is essential that the Web's performance keep up with increased demand and expectations. Web caching technology has been widely accepted as one of the effective approaches to alleviating Web traffic and increasing Web Quality of Service (QoS). This paper provides an up-to-date survey of the rapidly expanding Web caching literature. It discusses state-of-the-art web caching schemes and techniques, with emphasis on recent developments such as differentiated Web services, heterogeneous caching network structures, and dynamic content caching.

13.
Analyzing factors that influence end-to-end Web performance
Web performance affects the popularity of a particular Web site or service as well as the load on the network, yet there have been no publicly available end-to-end measurements focused on a large number of popular Web servers that examine the components of delay or the effectiveness of recent changes to the HTTP protocol. In this paper we report on an extensive study carried out from many geographically distributed client sites around the world to a collection of over 700 servers to which a majority of Web traffic is directed. Our results show that the HTTP/1.1 protocol, particularly with pipelining, is indeed an improvement over existing practice, but that servers serving a small number of objects, or closing a persistent connection without explicit notification, can reduce or eliminate any performance improvement. Similarly, caching and multi-server content distribution can also improve performance if done effectively.

14.
A Tag-Based Cache-Cooperative Distributed Web Server System
林曼筠, 钱华林. 软件学报 (Journal of Software), 2003, 14(1): 117-123
This paper surveys distributed Web server systems, a leading technique for improving Web server performance, discusses the strengths and weaknesses of existing schemes, and on that basis proposes a new distributed Web server system. The system uses a tag-based cache-cooperative Web request distribution method (TB-CCRD): a front end organizes the caches of the individual Web servers into one large virtual cache, raising the overall cache hit ratio and shortening response times; TCP connection handoff is processed in a distributed fashion to remove the front end as a performance bottleneck; and tags announce where a URL resides in the caches, avoiding extra intra-system communication. The result is a scalable, high-performance distributed Web server system.
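A minimal sketch of the tag idea, assuming back-end servers announce cached URLs to the front end, which keeps a URL-to-server tag table and falls back to hashing for untagged URLs; all names and the fallback rule are illustrative, and the distributed TCP handoff itself is not modeled.

```python
import hashlib

class TagDispatcher:
    # Back-end servers announce which URLs their local caches hold; the
    # front end keeps a URL -> server tag table so a request can be sent
    # straight to a server that already caches the page.
    def __init__(self, servers):
        self.servers = servers
        self.tags = {}                 # URL -> server that cached it

    def announce(self, server, url):
        self.tags[url] = server        # server tells the front end it cached url

    def dispatch(self, url):
        if url in self.tags:
            return self.tags[url]      # expected cache hit, no extra probing
        digest = hashlib.sha1(url.encode("utf-8")).digest()
        return self.servers[digest[0] % len(self.servers)]

d = TagDispatcher(["web1", "web2", "web3"])
d.announce("web2", "/news/index.html")
print(d.dispatch("/news/index.html"))    # -> web2
print(d.dispatch("/sports/today.html"))  # -> hash-based fallback
```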

15.
A Distributed Cache Queue Model for Cluster Services on the J2EE Platform
周敬利, 李福寿, 余胜生. 计算机工程 (Computer Engineering), 2005, 31(4): 100-101, 191
Cluster caching is a key technique for improving the scalability of J2EE (Java 2 Enterprise Edition) applications, but the clustering technology currently provided by J2EE still handles cluster-cache performance inadequately. Based on an analysis of how J2EE cluster caching is applied, this paper proposes a distributed cache queue model that effectively addresses reliability, scalability, and failover in the cluster. Experiments show that the distributed cache queue model improves cluster access efficiency and is a feasible solution.

16.
Design, implementation, and evaluation of differentiated caching services
With the dramatic explosion of online information, the Internet is undergoing a transition from a data communication infrastructure to a global information utility. PDAs, wireless phones, Web-enabled vehicles, modern PCs, and high-end workstations can be viewed as appliances that "plug in" to this utility for information. The increasing diversity of such appliances calls for an architecture for performance differentiation of information access. The key performance accelerator on the Internet is the caching and content distribution infrastructure. While many research efforts have addressed performance differentiation in the network and on Web servers, providing multiple levels of service in the caching system has received much less attention. This paper makes two main contributions. First, we describe, implement, and evaluate an architecture for differentiated content caching services as a key element of the Internet content distribution architecture. Second, we describe a control-theoretical approach that lays well-understood theoretical foundations for resource management to achieve performance differentiation in proxy caches. An experimental study using the Squid proxy cache shows that differentiated caching services provide significantly better performance to the premium content classes.
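The control-theoretic flavor of the resource management described above can be hinted at with a toy proportional controller that periodically adjusts the premium class's share of cache space toward a target hit-ratio differentiation; the gains, bounds, and control law are assumptions, not the paper's design.

```python
class CachePartitionController:
    # Periodically adjust the fraction of cache space given to the premium
    # class so its measured hit ratio tracks a target ratio relative to the
    # basic class (simple proportional feedback).
    def __init__(self, target_ratio=1.5, gain=0.05,
                 premium_share=0.5, min_share=0.1, max_share=0.9):
        self.target_ratio = target_ratio   # desired premium/basic hit-ratio ratio
        self.gain = gain                   # proportional gain
        self.premium_share = premium_share # fraction of cache for premium class
        self.min_share = min_share
        self.max_share = max_share

    def update(self, premium_hit_ratio, basic_hit_ratio):
        measured = premium_hit_ratio / max(basic_hit_ratio, 1e-6)
        error = self.target_ratio - measured
        self.premium_share += self.gain * error          # grow share if below target
        self.premium_share = min(self.max_share, max(self.min_share, self.premium_share))
        return self.premium_share

ctrl = CachePartitionController()
print(ctrl.update(premium_hit_ratio=0.45, basic_hit_ratio=0.40))  # nudges share upward
```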

17.
As the Internet has become a more central part of information technology, so have concerns about supplying enough bandwidth and serving web requests to end users within an acceptable time frame. Web caching was introduced in the 1990s to decrease network traffic, lessen user-perceived lag, and reduce load on origin servers by storing copies of web objects on servers closer to end users rather than forwarding all requests to the origin servers. Since web caches have limited space, they must effectively decide which objects are worth caching or replacing with other objects; this problem is known as cache replacement. We used neural networks to solve this problem and proposed the Neural Network Proxy Cache Replacement (NNPCR) method. The goal of this research is to implement NNPCR in a real environment such as the Squid proxy server. To do so, we propose an improved version of NNPCR referred to as NNPCR-2. We show how the improved model can be trained with up to twelve times more data and gains a 5-10% higher Correct Classification Ratio (CCR) than NNPCR. We implemented NNPCR-2 in the Squid proxy server and compared it with four other cache replacement strategies. In this paper, we use 84 times more data than NNPCR was tested against and present exhaustive test results for NNPCR-2 with different trace files and neural network structures. Our results demonstrate that NNPCR-2 makes important, balanced decisions with respect to hit rate and byte hit rate, the two performance metrics most commonly used to measure the performance of web proxy caches.

18.
Previous studies indicate that I/O can become a performance bottleneck in commodity PC-based cluster Web servers. Current local native file systems do not handle expensive file I/O well, while specialized file systems are limited in portability. In this paper, we present a lightweight, collaborative temporary file system (CTFS) to improve disk I/O performance for clustered Web servers. CTFS employs several techniques to achieve high performance, good scalability, and portability: (1) a lightweight local temporary file system at each node, (2) Remote Direct Memory Access (RDMA) to improve intra-cluster communication performance, and (3) a location-aware summary cache for scalable file-to-server lookup. Comprehensive trace-driven simulation experiments show that CTFS achieves up to 37% higher system throughput and reduces total disk I/O latency by up to 47% compared with a local asynchronous FFS solution.
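A location-aware summary cache could be sketched as a Bloom-filter-style digest of what a remote node caches, checked before any remote lookup; whether CTFS uses exactly this structure is not stated here, and the filter size and hash count are illustrative.

```python
import hashlib

class SummaryCache:
    # Compact Bloom-filter-style summary of the files cached at a remote
    # node; a node consults the summaries before asking peers, so most
    # remote misses are avoided (false positives are possible but rare).
    def __init__(self, bits=1024, hashes=3):
        self.bits = bits
        self.hashes = hashes
        self.bitmap = 0

    def _positions(self, name):
        for i in range(self.hashes):
            digest = hashlib.md5(f"{i}:{name}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.bits

    def add(self, name):
        for pos in self._positions(name):
            self.bitmap |= 1 << pos

    def might_contain(self, name):
        return all((self.bitmap >> pos) & 1 for pos in self._positions(name))

node_b_summary = SummaryCache()
node_b_summary.add("/tmp/session/42.dat")
print(node_b_summary.might_contain("/tmp/session/42.dat"))   # True
print(node_b_summary.might_contain("/tmp/session/99.dat"))   # almost surely False
```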

19.
With the exponential growth of WWW traffic, web proxy caching has become a critical technique for Internet web services. Well-organized proxy caching systems with multiple servers can greatly reduce user-perceived latency and network bandwidth consumption. Thus, many research papers have focused on improving web caching performance through efficient coordination algorithms among multiple servers. Hash-based algorithms are the most widely used server coordination mechanism; however, many technical issues still need to be addressed. In this paper, we propose a new hash-based web caching architecture, Tulip. Tulip aggregates web objects that are likely to be accessed together into object clusters and uses these clusters as the primary access units. Tulip extends the locality-based algorithm in UCFS to hash-based web proxy systems and proposes a simple algorithm to reduce the data-grouping overhead. It takes into account the access-speed disparity between memory and disk and replaces expensive small disk I/Os with fewer large ones. When a client request cannot be served from the server's memory, the system fetches the whole cluster containing the required object into memory, so future requests for other objects in the same cluster can be satisfied directly from memory and slow disk I/Os are avoided. It also introduces a simple and efficient data duplication algorithm: little maintenance work is needed when servers join, leave, or fail. Together with the local caching strategy, Tulip achieves better fault tolerance and load-balancing capability at minimal cost. Our simulation results show that Tulip performs better than previous approaches.

20.
This paper proposes a novel contribution to Web caching, specifically to Web cache replacement, called the intelligent client-side Web caching scheme (ICWCS). The approach splits the client-side cache into two caches: a short-term cache that receives Web objects directly from the Internet, and a long-term cache that receives Web objects from the short-term cache. Objects in the short-term cache are evicted by the least recently used (LRU) algorithm when the short-term cache is full. More significantly, when the long-term cache saturates, a neuro-fuzzy system is employed to manage its contents. The proposed solution is validated by trace-driven simulation, and the results are compared with the least recently used (LRU) and least frequently used (LFU) algorithms, the most common policies for evaluating Web caching performance. The simulation results show that the proposed approach improves Web caching hit ratio (HR) by up to 14.8% and 17.9% over LRU and LFU, respectively. Byte hit ratio (BHR) improves by up to 2.57% and 26.25%, and latency saving ratio (LSR) by 8.3% and 18.9%, over LRU and LFU, respectively.
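A toy version of the two-level split, with plain LRU in the short-term cache and a crude recency-times-frequency score standing in for the neuro-fuzzy classifier when the long-term cache is full; the capacities and scoring rule are illustrative.

```python
from collections import OrderedDict
import time

class TwoLevelClientCache:
    # New objects enter a small LRU-managed short-term cache; LRU victims
    # are promoted to a long-term cache whose eviction uses a simple
    # recency*frequency score as a crude stand-in for a learned classifier.
    def __init__(self, short_capacity=3, long_capacity=5):
        self.short = OrderedDict()         # url -> object (LRU order)
        self.long = {}                     # url -> (object, hits, last_access)
        self.short_capacity = short_capacity
        self.long_capacity = long_capacity

    def get(self, url):
        if url in self.short:
            self.short.move_to_end(url)
            return self.short[url]
        if url in self.long:
            obj, hits, _ = self.long[url]
            self.long[url] = (obj, hits + 1, time.time())
            return obj
        return None

    def put(self, url, obj):
        self.short[url] = obj
        self.short.move_to_end(url)
        if len(self.short) > self.short_capacity:
            victim, vobj = self.short.popitem(last=False)   # LRU victim
            self._promote(victim, vobj)

    def _promote(self, url, obj):
        if len(self.long) >= self.long_capacity:
            # evict the long-term entry with the lowest recency*frequency score
            worst = min(self.long, key=lambda u: self.long[u][1] * self.long[u][2])
            del self.long[worst]
        self.long[url] = (obj, 1, time.time())

cache = TwoLevelClientCache()
for i in range(6):
    cache.put(f"/page{i}.html", f"<html>{i}</html>")
print(cache.get("/page0.html") is not None)   # True: promoted to the long-term cache
```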
