Similar Documents
A total of 20 similar documents were found (search time: 31 ms).
1.
A Centrally Managed Web Caching System and Its Performance Analysis (cited 5 times: 0 self-citations, 5 by others)
Sharing cached files is an important way to reduce network traffic and server load. After introducing Web caching technology and ICP, the popular inter-cache communication protocol, this paper proposes a centrally managed Web caching system. The system dispatches each user HTTP request, according to a predefined algorithm, to a suitable cache server in the system, thereby eliminating the heavy communication overhead and cache-processing burden between servers inside the caching system and reducing the redundancy of cached content. Analysis shows that the centrally managed Web caching system offers higher caching efficiency, lower processing overhead, and smaller latency than a simple ICP-based caching system, and that it scales well.
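
The abstract does not specify the dispatch algorithm; a minimal sketch of one common choice, hashing the requested URL to pick a cache server so that each object lives on exactly one node and no inter-cache queries are needed, might look like the following. The server list and helper names are illustrative assumptions, not the paper's design.

```python
import hashlib

# Illustrative cache-server pool; addresses are assumptions, not from the paper.
CACHE_SERVERS = ["cache1.example:3128", "cache2.example:3128", "cache3.example:3128"]

def pick_cache_server(url: str, servers=CACHE_SERVERS) -> str:
    """Map a requested URL to exactly one cache server.

    Because every URL deterministically hashes to a single server, the
    servers never need to query each other (as ICP peers would), and each
    object is stored only once across the cluster.
    """
    digest = hashlib.md5(url.encode("utf-8")).hexdigest()
    index = int(digest, 16) % len(servers)
    return servers[index]

if __name__ == "__main__":
    for url in ["http://example.com/a.html", "http://example.com/img/logo.png"]:
        print(url, "->", pick_cache_server(url))
```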

2.
Leung  K. Y.  Wong  Eric W. M.  Yeung  K. H. 《World Wide Web》2004,7(3):297-314
Content Delivery Networks (CDN) have been used on the Internet to cache media content so as to reduce the load on the original media server, network congestion, and latency. Because media content is much larger than normal web objects, the caching algorithms currently used on the Internet are no longer suitable. This paper presents a high-performance prefetch system that accommodates time-varying user behavior. A hybrid caching technique, which combines prefetch and replacement algorithms, is also introduced. The robustness of the cache system against imperfect user request information is evaluated using three request noise models. Two prefetch performance indices are also presented to help content administrators decide when to update the user request profile for the caching algorithms.

3.
CDNs improve network performance and offer fast and reliable applications and services by distributing content to cache servers located close to users. The Web's growth has transformed communications and business services such that the speed, accuracy, and availability of network-delivered content have become absolutely critical, both in their own right and as measures of Web performance. Proxy servers partially address the need for rapid content delivery by providing multiple clients with a shared cache location. In this context, if a requested object exists in a cache (and the cached version has not expired), clients get a cached copy, which typically reduces delivery time. CDNs act as trusted overlay networks that offer high-performance delivery of common Web objects, static data, and rich multimedia content by distributing the content load among servers that are close to the clients. CDN benefits include reduced origin server load, reduced latency for end users, and increased throughput. CDNs can also improve Web scalability and disperse flash-crowd events. Here we offer an overview of the CDN architecture and popular CDN service providers.

4.
With the exponential growth of WWW traffic, web proxy caching has become a critical technique for Internet web services. Well-organized proxy caching systems with multiple servers can greatly reduce user-perceived latency and decrease network bandwidth consumption. Many research papers have therefore focused on improving web caching performance through efficient coordination algorithms among multiple servers. Hash-based algorithms are the most widely used server coordination mechanism, but several technical issues still need to be addressed. In this paper, we propose a new hash-based web caching architecture, Tulip. Tulip aggregates web objects that are likely to be accessed together into object clusters and uses these clusters as the primary access units. Tulip extends the locality-based algorithm in UCFS to hash-based web proxy systems and proposes a simple algorithm to reduce the data-grouping overhead. It takes into account the access-speed disparity between memory and disk and replaces expensive small disk I/Os with fewer large ones. When a client request cannot be served from the server's memory, the system fetches the whole cluster containing the required object into memory, so that future requests for other objects in the same cluster can be satisfied directly from memory and slow disk I/Os are avoided. Tulip also introduces a simple and efficient data duplication algorithm that requires little maintenance work when servers join, leave, or fail. Together with its local caching strategy, Tulip achieves better fault tolerance and load-balancing capability at minimal cost. Our simulation results show that Tulip performs better than previous approaches.
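
Tulip's exact clustering and placement algorithms are not given in the abstract; a minimal sketch of the cluster-as-access-unit idea, where a miss pulls an entire object cluster into memory so that later requests for co-accessed objects hit in RAM, could look like the following. The cluster assignment and disk layer are placeholders for illustration only.

```python
class ClusterCache:
    """Toy in-memory cache whose unit of transfer is a whole object cluster."""

    def __init__(self, cluster_of, load_cluster_from_disk):
        self.cluster_of = cluster_of                           # object id -> cluster id
        self.load_cluster_from_disk = load_cluster_from_disk   # cluster id -> {obj: bytes}
        self.memory = {}                                       # cluster id -> {obj: bytes}

    def get(self, obj_id):
        cid = self.cluster_of(obj_id)
        if cid not in self.memory:
            # One large read brings in every object of the cluster,
            # replacing many small disk I/Os with a single big one.
            self.memory[cid] = self.load_cluster_from_disk(cid)
        return self.memory[cid].get(obj_id)
```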

5.
This paper describes two exact algorithms for the joint problem of object placement and request routing in a content distribution network (CDN). A CDN is a technology used to efficiently distribute electronic content throughout an existing Internet Protocol network. The problem consists of replicating content on the proxy servers and routing requests for the content to a suitable proxy server in a CDN such that the total cost of distribution is minimized, subject to an upper bound on the end-to-end object transfer time. The problem is formulated as a nonlinear integer program, which is linearized in three different ways. Two algorithms, one based on Benders decomposition and the other based on Lagrangean relaxation and decomposition, are described for the solution of the problem. Computational experiments compare the proposed linearizations and the two algorithms on randomly generated Internet topologies.
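
The abstract does not reproduce the formulation; a simplified sketch of a joint placement-and-routing program of this general type, ignoring the transfer-time bound, is given below. Here lambda_ik is the request rate of client i for object k, c_ij the unit cost of serving client i from proxy j, h_j the unit storage cost at proxy j, s_k the size of object k, S_j the capacity of proxy j, y_jk = 1 if object k is placed at proxy j, and x_ijk = 1 if client i's requests for object k are routed to proxy j. All symbols are illustrative assumptions rather than the paper's notation.

```latex
\begin{align*}
\min\; & \sum_{i}\sum_{j}\sum_{k} \lambda_{ik}\, c_{ij}\, x_{ijk}
        \;+\; \sum_{j}\sum_{k} h_{j}\, s_{k}\, y_{jk} \\
\text{s.t.}\;
 & \sum_{j} x_{ijk} = 1 \quad \forall i,k
   && \text{(every request is routed somewhere)} \\
 & x_{ijk} \le y_{jk} \quad \forall i,j,k
   && \text{(route only to a proxy that holds the object)} \\
 & \sum_{k} s_{k}\, y_{jk} \le S_{j} \quad \forall j
   && \text{(proxy storage capacity)} \\
 & x_{ijk},\, y_{jk} \in \{0,1\}
\end{align*}
```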

6.
As the Internet has become a more central aspect of information technology, so have concerns with supplying enough bandwidth and serving web requests to end users in an appropriate time frame. Web caching was introduced in the 1990s to help decrease network traffic, lessen user-perceived lag, and reduce loads on origin servers by storing copies of web objects on servers closer to end users rather than forwarding all requests to the origin servers. Since web caches have limited space, they must effectively decide which objects are worth caching or replacing with other objects. This problem is known as cache replacement. We used neural networks to solve this problem and proposed the Neural Network Proxy Cache Replacement (NNPCR) method. The goal of this research is to implement NNPCR in a real environment such as the Squid proxy server. To do so, we propose an improved strategy of NNPCR referred to as NNPCR-2. We show how the improved model can be trained with up to twelve times more data and gain a 5–10% increase in Correct Classification Ratio (CCR) over NNPCR. We implemented NNPCR-2 in the Squid proxy server and compared it with four other cache replacement strategies. In this paper, we use 84 times more data than NNPCR was tested against and present exhaustive test results for NNPCR-2 with different trace files and neural network structures. Our results demonstrate that NNPCR-2 made important, balanced decisions with respect to the hit rate and byte hit rate, the two performance metrics most commonly used to measure the performance of web proxy caches.
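
NNPCR-2's network structure and input features are not specified in the abstract; a minimal sketch of the general idea, scoring each cached object with a small neural network and evicting the lowest-scoring one, is shown below. The feature choice, weights, and training are placeholders, not the paper's model.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyNetScorer:
    """One-hidden-layer network mapping object features to a 'worth keeping' score.

    Weights are random placeholders; the real system would train them on proxy logs.
    """

    def __init__(self, n_features=3, n_hidden=4, seed=0):
        rng = random.Random(seed)
        self.w1 = [[rng.uniform(-1, 1) for _ in range(n_features)] for _ in range(n_hidden)]
        self.w2 = [rng.uniform(-1, 1) for _ in range(n_hidden)]

    def score(self, features):
        hidden = [sigmoid(sum(w * f for w, f in zip(row, features))) for row in self.w1]
        return sigmoid(sum(w * h for w, h in zip(self.w2, hidden)))

def evict_one(cache, scorer):
    """cache: {object_id: (recency, frequency, size)}; remove the least valuable entry."""
    victim = min(cache, key=lambda oid: scorer.score(cache[oid]))
    del cache[victim]
    return victim
```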

7.
Multimedia proxies play an important role in multimedia streaming over the wireless Internet. Since wireless networks exhibit different characteristics from the Internet, multimedia proxy caching over the wireless Internet faces additional challenges. In this paper, we present a study of cache replacement for a single server and server selection for multiple servers across the wireless Internet. Considering the multiple objectives of a multimedia proxy, we design a unified cost metric to measure proxy performance in the wireless Internet. Based on this unified cost metric, we propose a novel replacement algorithm for the single-server case and a new server-selection policy for multiple servers to improve end-to-end performance in terms of throughput, media quality, and start-up latency. To effectively handle errors occurring on the wireless link, channel-adaptive unequal error protection is deployed according to the distinct quality-of-service requirements of layered or scalable media. Simulation results demonstrate that our approaches achieve significantly better performance than known cache-replacement algorithms and server-selection schemes, respectively.
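
The abstract mentions a unified cost metric combining multiple proxy objectives but does not give its form; a minimal sketch of one way to combine such terms as a weighted sum is shown below. The weights and the specific terms are assumptions, not the paper's metric.

```python
def unified_cost(throughput_mbps, start_up_latency_s, distortion,
                 w_throughput=1.0, w_latency=2.0, w_quality=5.0):
    """Weighted combination of proxy objectives: lower is better.

    Higher throughput reduces the cost, while start-up latency and media
    distortion (standing in for quality loss) increase it. Weights are
    illustrative and would be tuned for the target wireless scenario.
    """
    return (w_latency * start_up_latency_s
            + w_quality * distortion
            - w_throughput * throughput_mbps)

# Example: compare two candidate servers (or cache states) by their cost.
print(unified_cost(4.0, 1.5, 0.2) < unified_cost(3.0, 0.8, 0.1))
```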

8.
Wireless Mesh Networks (WMNs) extend Internet access to areas where a wired infrastructure is not available. Problems that arise include congestion around gateways, high access latency, and low throughput. Object replication and placement is therefore essential for multi-hop wireless networks. Many replication schemes have been proposed for the Internet, but they are designed for CDNs that have both high bandwidth and high server capacity, which makes them unsuitable for the wireless environment, and object replication has received comparatively little attention from the research community when it comes to WMNs. In this paper, we propose an object replication and placement scheme for WMNs in which each mesh router acts as a replica server in a peer-to-peer fashion. The scheme exploits graph partitioning to build a hierarchy from fine-grained to coarse-grained partitions. The challenge is to replicate content as close as possible to the requesting clients, and thus reduce the access latency per object, while minimizing the number of replicas. Using simulation tests, we demonstrate that our scheme is scalable, performing well with respect to the number of replica servers and the number of objects. The simulation results show that our proposed scheme performs better than other replication schemes.

9.
Proxy servers have been used to cache web objects to alleviate the load on web servers and to reduce network congestion on the Internet. In this paper, a central video server is connected to a proxy server via a wide area network (WAN) and the proxy server reaches many clients via local area networks (LANs). We assume a video can be either entirely or partially cached in the proxy to reduce WAN bandwidth consumption. Since storage space and sustained disk I/O bandwidth are limited resources in the proxy, how to efficiently utilize them to maximize the WAN bandwidth reduction is an important issue. We design a progressive video caching policy in which each video can be cached at one of several levels, corresponding to different cached data sizes and required WAN bandwidths. For each video, the proxy server decides whether to cache a smaller amount of data at a lower level or to gradually accumulate more data to reach a higher level. The proposed progressive caching policy allows the proxy to adjust the caching amount for each video based on its resource condition and the user access pattern. We investigate scenarios in which the access pattern is known a priori or unknown, and evaluate the effectiveness of the caching policy.
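
The paper's level-selection rule is not given in the abstract; a greedy sketch of the progressive idea, assuming each video exposes a list of cumulative (cached bytes, WAN bandwidth saved) levels, might look like this. The input format and the benefit-per-byte heuristic are illustrative assumptions.

```python
def choose_cache_levels(videos, storage_budget):
    """videos: {vid: [(bytes_cached, wan_bw_saved), ...]}, cumulative per level,
    with level 0 meaning nothing cached, e.g. [(0, 0), (50e6, 1.0), (200e6, 2.5)].

    Greedily upgrades whichever video offers the best marginal bandwidth
    saving per additional cached byte until the storage budget is exhausted."""
    level = {vid: 0 for vid in videos}
    used = 0
    while True:
        best = None
        for vid, opts in videos.items():
            nxt = level[vid] + 1
            if nxt >= len(opts):
                continue
            extra_bytes = opts[nxt][0] - opts[level[vid]][0]
            extra_saved = opts[nxt][1] - opts[level[vid]][1]
            if extra_bytes <= 0 or used + extra_bytes > storage_budget:
                continue
            gain = extra_saved / extra_bytes
            if best is None or gain > best[0]:
                best = (gain, vid, extra_bytes)
        if best is None:
            break
        _, vid, extra_bytes = best
        level[vid] += 1
        used += extra_bytes
    return level
```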

10.
Proxy caching is an effective approach to reducing the response latency of client requests, web server load, and network traffic. Recently there has been a major shift in the usage of the Web: emerging web applications require an increasing amount of server-side processing, yet current proxy protocols do not support caching and execution of web processing units. In this paper, we present a weblet environment in which processing units on web servers are implemented as weblets. These weblets can migrate from web servers to proxy servers to perform the required computation and provide faster responses. A weblet engine is developed to provide the execution environment on proxy servers as well as web servers and to facilitate uniform weblet execution. We have conducted thorough experimental studies to investigate the performance of the weblet approach. We modified the industry-standard e-commerce benchmark TPC-W to fit the weblet model and used its workload model for performance comparisons. The experimental results show that the weblet environment significantly improves system performance in terms of client response latency, web server throughput, and workload. Our prototype weblet system also demonstrates the feasibility of integrating the weblet environment with the current web/proxy infrastructure.

11.
In this paper, we present a novel technique for the problem of designing a Content Distribution Network (CDN), a technology used to efficiently distribute electronic content throughout an existing IP network. Our design proposal consists of jointly deciding on (i) the number and placement of proxy servers on a given set of potential nodes, (ii) the replication of content on the proxy servers, and (iii) the routing of content requests to a suitable proxy server, such that the total cost of distribution is minimized. We model the problem using a nonlinear integer programming formulation. The novelty of the proposed formulation lies in simultaneously addressing three interdependent problems in this context as well as explicitly representing the distribution structure of a CDN through the objective function. We offer a linearization for the model, develop an exact solution procedure based on Benders' decomposition, and utilize a variant of this procedure to accelerate the algorithm. In addition, we provide a fast and efficient heuristic that can be used to obtain near-optimal solutions to the problem. Finally, the paper concludes with computational results showing the performance of the decomposition procedure and the heuristic algorithm on randomly generated Internet topologies.

12.
A Data Cube Model for Prediction-Based Web Prefetching (cited 7 times: 0 self-citations, 7 by others)
Reducing web latency is one of the primary concerns of Internet research. Web caching and web prefetching are two effective techniques for latency reduction. A primary method for intelligent prefetching is to rank potential web documents based on prediction models that are trained on past web server and proxy server log data, and to prefetch the highly ranked objects. For this method to work well, the prediction model must be updated constantly, and different queries must be answered efficiently. In this paper we present a data-cube model that represents Web access sessions for data mining and supports construction of the prediction model. The cube model organizes session data into three dimensions. With the data cube in place, we apply efficient data mining algorithms for clustering and correlation analysis, and the resulting web page clusters can then be used to guide the prefetching system. We propose an integrated web-caching and web-prefetching model in which the issues of prefetching aggressiveness, replacement policy, and increased network traffic are addressed together in a single framework. The core of our integrated solution is a prediction model based on the statistical correlation between web objects, which can be frequently updated by querying the data cube of web server logs. To our knowledge, this integrated data cube and prediction-based prefetching framework is the first such effort.
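
The abstract does not detail the correlation statistic; a minimal sketch of prefetching the objects most frequently co-accessed with the page just served, based on pairwise session co-occurrence counts, might look like this. The count structure, session format, and candidate limit are assumptions for illustration.

```python
from collections import defaultdict
from itertools import combinations

def build_cooccurrence(sessions):
    """sessions: iterable of sets of page ids accessed in one session."""
    co = defaultdict(int)
    for pages in sessions:
        for a, b in combinations(sorted(pages), 2):
            co[(a, b)] += 1
            co[(b, a)] += 1
    return co

def prefetch_candidates(current_page, co, k=3):
    """Return the k pages most often co-accessed with current_page."""
    scores = {b: n for (a, b), n in co.items() if a == current_page}
    return sorted(scores, key=scores.get, reverse=True)[:k]

if __name__ == "__main__":
    sessions = [{"index", "news", "sports"}, {"index", "news"}, {"index", "weather"}]
    co = build_cooccurrence(sessions)
    print(prefetch_candidates("index", co))
```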

13.
To meet the requirements of massive data distribution in CDN networks, an efficient data-distribution mechanism based on the P2P service of CDN servers is designed. By performing P2P distribution separately on the partitioned data, it overcomes the shortcoming of existing CDN server P2P services, which cannot adaptively adjust the distribution process as Internet bandwidth and CDN server load change dynamically. Simulation results show that the proposed mechanism improves data-distribution performance under network bandwidth fluctuations and can well satisfy the massive data distribution requirements of real CDN networks.

14.
The growth of web-based applications in business and e-commerce is increasing the demand for high-performance web servers that deliver better throughput and lower user-perceived latency. These demands are leading many enterprises to replace powerful single servers with robust newcomers: cluster web servers. Load-balancing algorithms play an important role in boosting the performance of such clusters. Previous load-balancing algorithms, designed for serving static content, suffer significant performance degradation under dynamic, database-driven workloads. We therefore propose an approximation-based load-balancing algorithm with admission control for cluster-based web servers. Since it is difficult to determine web server loads accurately from feedback sent by distributed agents on the servers, we propose an analytical model of a web server to estimate its load. The algorithm classifies requests by their service times and resource demands and tracks the number of outstanding requests in each class on each web server node to dynamically estimate that node's load. A proportional-integral (PI) controller from control theory handles the model's estimation error. The estimated available capacity of each web server is then used for load-balancing and admission-control decisions. Implementation results with a standard benchmark confirm the effectiveness of the proposed scheme, which improves both the mean response time and the throughput of the cluster compared with rival load-balancing algorithms, and avoids overload even when request rates exceed the cluster capacity.
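
The model details are not in the abstract; a minimal sketch of the PI-correction idea, where the controller nudges a model-based capacity estimate whenever observed response times drift from a target, might be written as follows. The gains, target, and capacity model are illustrative assumptions.

```python
class PICapacityEstimator:
    """Corrects a model-based capacity estimate with a proportional-integral term."""

    def __init__(self, model_capacity, kp=0.5, ki=0.1, target_resp_ms=200.0):
        self.model_capacity = model_capacity   # requests/sec from the analytical model
        self.kp, self.ki = kp, ki              # illustrative controller gains
        self.target = target_resp_ms
        self.integral = 0.0

    def update(self, measured_resp_ms):
        # Positive error => server slower than target => shrink usable capacity.
        error = (measured_resp_ms - self.target) / self.target
        self.integral += error
        correction = self.kp * error + self.ki * self.integral
        return max(0.0, self.model_capacity * (1.0 - correction))

def admit(request_rate, estimator, measured_resp_ms):
    """Simple admission control: accept new work only while capacity headroom remains."""
    return request_rate < estimator.update(measured_resp_ms)
```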

15.
《Computer Networks》2000,32(3):261-275
Owing to the fast growth of the World Wide Web (WWW), web traffic has become a major component of Internet traffic. Consequently, reducing document retrieval latency on the WWW becomes more and more important. The latency can be reduced in two ways: by reducing network delay and by improving web servers' throughput. Our research aims at improving a web server's throughput by keeping a memory cache in the web server's address space. In this paper, we focus on the design and implementation of a memory cache scheme. We propose a novel web cache management policy named the adaptive-level policy, which caches either the whole file content or only a portion of it, according to the file size. The experimental results show three things. First, our memory cache is beneficial: under our experimental workloads, the throughput improvement reaches 32.7%. Second, our cache management policy is suitable for current web traffic. Third, with the increasing popularity of multimedia files, our policy will outperform others currently used on the WWW.
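
The adaptive-level policy caches either a whole file or only a portion of it depending on file size, but the abstract gives no thresholds; a minimal sketch under assumed cutoffs could be:

```python
def portion_to_cache(file_size, whole_threshold=64 * 1024, prefix_bytes=64 * 1024):
    """Return how many bytes of a file to keep in the memory cache.

    Small files are cached whole; large files contribute only a fixed-size
    prefix, so one huge object cannot evict many small popular ones.
    Thresholds are illustrative, not the paper's values.
    """
    if file_size <= whole_threshold:
        return file_size
    return prefix_bytes
```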

16.
Application of CDN Technology in Metropolitan Area Networks (cited 4 times: 0 self-citations, 4 by others)
熊明  赵政  赵怿甦 《计算机应用》2005,25(1):196-198
A CDN is a virtual network built on top of the Internet and composed of groups of node servers distributed across different regions. Using techniques such as caching, replication, load balancing, and client-request redirection, a CDN pushes information resources to the network edge so that clients can quickly retrieve the content they need from the "nearest and best" server, thereby improving end-user access speed. This paper briefly introduces CDN concepts and technologies, including the basic working principles of CDNs, design principles for and a comparison of content-routing mechanisms, and a comparison of node content engines. The CDN technical solutions described provide a technical reference for the development of broadband services.

17.

Video content delivery networks face many challenges, including scalability, quality of service, and flexibility, which video suppliers address through CDNs. Cloud computing and Video content Delivery as a Service (VDaaS) play a key role in improving the content delivery standard and make the work of content providers easier. By hosting video content in the cloud and optimizing the cloud CDN, content delivery costs are minimized and overall delivery performance is enhanced. Cost optimization of a cloud-based content delivery network requires a focus on delay or throughput, overall performance, and content delivery. Content placement and access, as well as QoS and QoE in the CDN, can be improved by enhancing video content delivery performance. In this paper, a unique cloud-based model for video content delivery, called the shared storage-based cloud CDN (SS-CCDN), is developed to achieve this objective. The design uses algorithms to optimize the placement of video data and its dynamic update; GA, PSO, and ACO algorithms are used for analysis. The proposed model uses direct and assisted push-pull content delivery schemes for cost-efficient delivery. The low-cost VDaaS model reduces storage cost and keeps latency and traffic cost low. Experimental results validate that, with regard to storage, traffic, and latency, the model delivers higher performance at lower cost and satisfies the QoS and QoE requirements of content delivery.

18.
Iyer  Ravi 《World Wide Web》2004,7(3):259-280
As Internet usage continues to expand rapidly, careful attention needs to be paid to the design of Internet servers for achieving high performance and end-user satisfaction. Currently, the memory system remains a significant performance bottleneck for Internet servers employing multi-GHz processors. In this paper, our aim is two-fold: (1) to characterize the cache/memory performance of web server workloads and (2) to propose and evaluate cache design alternatives for future web servers. We chose SPECweb99 as the representative web server workload, and our entire characterization and evaluation methodology is based on our CASPER simulation framework. We begin by exploring the processor cache design space for single and dual-processor servers. Based on our observations, we then evaluate other cache hierarchy alternatives such as chipset caches, coherence filters, and decompressed page stores. We show the sensitivity of these components to basic organization parameters such as cache size, line size, and degree of associativity. We also present the performance implications of routing memory requests initiated by I/O devices through these caches. Based on detailed simulation data and its implications for system-level performance, this paper shows that chipset caches have significant potential for improving future web server performance.

19.
Content delivery networks (CDN) and peer-to-peer networks (P2P) are analyzed and compared, and their respective advantages and disadvantages are pointed out. Based on the characteristics of P4P, a technology in which telecom operators actively participate in P2P networks, a design for a hybrid system combining P4P, P2P, and CDN is presented, together with an algorithm for selecting the nodes (pseudo-CDN nodes) that assist CDN nodes in distributing content. The algorithm uses P4P to obtain network information provided by the operators and selects suitable edge nodes to contribute their capacity and bandwidth to serve other nodes, thereby reducing the number of edge proxy servers in the system, increasing system capacity, and reducing the load on the backbone network. Simulation experiments analyze the improvements in link cost and time cost once the underlying network is taken into account; the results show that the algorithm reduces cross-ISP traffic and improves system performance.
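
The selection algorithm itself is only summarized above; a minimal sketch of the idea, preferring candidate edge nodes that P4P reports as being in the same network partition as the requesting clients and that have the most spare upload bandwidth, could look like this. The P4P data structures, field names, and thresholds are assumptions.

```python
def select_pseudo_cdn_nodes(candidates, client_pid, k=3, min_upload_kbps=512):
    """candidates: list of dicts such as
         {"node": "peer-12", "pid": "isp-a-east", "spare_upload_kbps": 800}
    where 'pid' is the P4P-provided network partition of the node.

    Prefer nodes inside the clients' partition (avoiding cross-ISP traffic),
    then pick the k nodes with the most spare upload bandwidth."""
    eligible = [c for c in candidates if c["spare_upload_kbps"] >= min_upload_kbps]
    same_isp = [c for c in eligible if c["pid"] == client_pid]
    pool = same_isp if same_isp else eligible
    pool.sort(key=lambda c: c["spare_upload_kbps"], reverse=True)
    return [c["node"] for c in pool[:k]]
```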

20.
姚源  褚伟 《微机发展》2007,17(9):178-180
This paper evaluates and compares the performance of streaming media based on multiple description coding (MDC) over peer-to-peer (P2P) networks and content delivery networks (CDN). Multiple servers simultaneously provide the same description for a single client request, which improves the reliability of network transmission and increases the servers' data transfer rate. Both approaches were implemented in the ns-2 network simulator, and the experimental results show that, although P2P networks are highly unstable, the quality of P2P-based MDC streaming video is clearly better than that delivered over a CDN.
