Similar Documents
20 similar documents found (search time: 31 ms)
1.
A case study of Web server benchmarking using parallel WAN emulation
Carey, Rob, Martin. Performance Evaluation, 2002, 49(1-4): 111-127
This paper describes the use of a parallel discrete-event network emulator called the Internet Protocol Traffic and Network Emulator (IP-TNE) for Web server benchmarking. The experiments in this paper demonstrate the feasibility of high-performance wide area network (WAN) emulation using parallel discrete-event simulation (PDES) techniques on a single shared-memory multiprocessor. Our experiments with an Apache Web server achieve up to 8000 HTTP/1.1 transactions/s for static document retrieval across emulated WAN topologies with up to 4096 concurrent Web/TCP clients. The results show that WAN characteristics, including round-trip delays, packet losses, and bandwidth asymmetry, all have significant impacts on Web server performance, as do client protocol behaviors. WAN emulation using the IP-TNE enables stress testing and benchmarking of Web servers in ways that may not be possible in simple local area network (LAN) test scenarios.

2.
Rousskov, Alex; Soloviev, Valery. World Wide Web, 1999, 2(1-2): 47-67
This paper presents a performance study of the state-of-the-art caching proxy Squid. We instrumented Squid to measure per-request network and disk activity and conducted a series of experiments on large Web caches. We discovered many interesting and consistent patterns across a wide variety of environments. Our data and analysis are essential for understanding, modeling, benchmarking, and tuning the performance of a proxy server. This revised version was published online in August 2006 with corrections to the Cover Date.

3.
Replication of information across a server cluster provides a promising way to support popular Web sites. However, a Web-server cluster requires some mechanism for scheduling requests to the most available server. One common approach is to use the cluster Domain Name System (DNS) as a centralized dispatcher. The main problem is that WWW address-caching mechanisms (although they reduce network traffic) let this DNS dispatcher control only a very small fraction of the requests reaching the Web-server cluster. The non-uniformity of the load from different client domains and the high variability of real Web workloads introduce additional degrees of complexity into the load-balancing issue. These characteristics render existing scheduling algorithms for traditional distributed systems inapplicable to controlling the load of Web-server clusters, and they motivate research into entirely new DNS policies that require some system state information. We analyze various DNS dispatching policies under realistic conditions where state information must be estimated with low computation and communication overhead so as to be applicable to a Web cluster architecture. In a model of realistic scenarios for the Web cluster, a large set of simulation experiments shows that incorporating the proposed state estimators into the dispatching policies can substantially improve the effectiveness of the DNS scheduling algorithms, particularly compared with DNS algorithms that do not use adequate state information. This revised version was published online in August 2006 with corrections to the Cover Date.
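The policy family described above, DNS dispatching guided by cheap state estimates, can be illustrated with a minimal sketch. The accounting rule here (charging the selected server for the hidden requests a cached mapping will attract, estimated as the requesting domain's request rate times the caching TTL) is an illustrative assumption, not a specific policy from the paper:

```python
def dns_dispatch(domain_rate, est_load, ttl):
    """Assign a client domain's address-mapping request to the server
    with the smallest estimated load, then charge that server for the
    hidden requests the cached mapping will attract.

    est_load: dict server -> estimated pending load (mutated in place).
    domain_rate: requests/s arriving from the requesting client domain
    (a hypothetical estimator output).
    """
    server = min(est_load, key=est_load.get)
    # Hidden load: requests resolved from intermediate name-server
    # caches, invisible to the DNS until the mapping expires.
    est_load[server] += domain_rate * ttl
    return server
```

Each dispatch decision thus accounts for requests the DNS will never see, which is the core difficulty the abstract identifies.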

4.
Iyengar, Arun K.; Squillante, Mark S.; Zhang, Li. World Wide Web, 1999, 2(1-2): 85-100
In this paper we develop a general methodology for characterizing the access patterns of Web server requests based on a time-series analysis of finite collections of observed data from real systems. Our approach is used together with the access logs from the IBM Web site for the Olympic Games to demonstrate some of its advantages over previous methods and to construct a particular class of benchmarks for large-scale, heavily accessed Web server environments. We then apply an instance of this class of benchmarks to analyze aspects of large-scale Web server performance, demonstrating some additional problems with methods commonly used to evaluate Web server performance at different request traffic intensities. This revised version was published online in August 2006 with corrections to the Cover Date.

5.
This paper measures and compares the network performance (with respect to packet forwarding) of three popular operating systems when used in today's Gigabit Ethernet networks. Specifically, the paper compares the packet-forwarding performance of Linux, Windows Server, and Windows XP. We measure both kernel- and user-level packet forwarding when subjecting hosts to different traffic load conditions. The performance is compared and analyzed in terms of throughput, packet loss, delay, and CPU availability. Our evaluation methodology is based on packet-forwarding measurement, a standard and popular benchmark for assessing the performance of network elements such as servers, gateways, routers, and switches. It considers different configuration setups and utilizes open-source software tools to generate relatively high traffic rates. We consider today's typical network hosts with modern processors and Gigabit network cards. Our measurements show that, in general, Linux exhibits superior overall performance for kernel (or IP) packet forwarding, whereas Windows Server exhibits superior performance for user-level packet forwarding.

6.
Eggert, Lars; Heidemann, John. World Wide Web, 1999, 2(3): 133-142
The current World Wide Web service model treats all requests equivalently, both while being processed by servers and while being transmitted over the network. For some uses, such as Web prefetching or multiple priority schemes, different levels of service are desirable. This paper presents three simple, server-side, application-level mechanisms (limiting process pool size, lowering process priorities, limiting transmission rate) to provide two different levels of Web service (regular and low priority). We evaluated the performance of these mechanisms under combinations of two foreground workloads (light and heavy) and two levels of available network bandwidth (10 Mb/s and 100 Mb/s). Our experiments show that even with background traffic sufficient to saturate the network, foreground performance is reduced by at most 4-17%. Thus, our user-level mechanisms can effectively provide different service classes even in the absence of operating system and network support. This revised version was published online in August 2006 with corrections to the Cover Date.
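Of the three server-side mechanisms listed above, transmission-rate limiting is the easiest to sketch in isolation. The token-bucket formulation below is a common way to implement it and is our assumption rather than necessarily the paper's implementation; `rate_bps` and `burst` are hypothetical parameters:

```python
import time

class TokenBucket:
    """Cap the send rate of low-priority (background) responses so
    regular-priority traffic keeps most of the bandwidth.

    rate_bps: allowed bytes/second for the low-priority class.
    burst:    maximum instantaneous burst, in bytes.
    """
    def __init__(self, rate_bps, burst, now=None):
        self.rate = rate_bps
        self.capacity = burst
        self.tokens = burst
        self.last = time.monotonic() if now is None else now

    def try_send(self, nbytes, now=None):
        """Refill tokens for elapsed time; permit the send only if
        enough tokens remain."""
        now = time.monotonic() if now is None else now
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False
```

A low-priority sender would call `try_send` before each write and sleep briefly when it returns False.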

7.
Replication of information across multiple servers is becoming a common approach to support popular Web sites. A distributed architecture with some mechanism to assign client requests to Web servers is more scalable than any centralized or mirrored architecture. In this paper, we consider distributed systems in which the Authoritative Domain Name Server (ADNS) of the Web site takes the request dispatcher role by mapping the URL hostname into the IP address of a visible node, that is, a Web server or a Web cluster interface. This architecture can support local and geographical distribution of the Web servers. However, the ADNS controls only a very small fraction of the requests reaching the Web site because the address mapping is not requested for each client access. Indeed, to reduce Internet traffic, address resolution is cached at various name servers for a time-to-live (TTL) period. This opens an entirely new set of problems that traditional centralized schedulers of parallel/distributed systems do not have to face. The heterogeneity assumption on Web node capacity, which is much more likely in practice, increases the order of complexity of the request assignment problem and severely affects the applicability and performance of the existing load sharing algorithms. We propose new assignment strategies, namely adaptive TTL schemes, which tailor the TTL value for each address mapping instead of using a fixed value for all mapping requests. The adaptive TTL schemes are able to address both the nonuniformity of client requests and the heterogeneous capacity of Web server nodes. Extensive simulations show that the proposed algorithms are very effective in avoiding node overload, even for high levels of heterogeneity and limited ADNS control.
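The adaptive-TTL idea, tailoring the TTL of each address mapping instead of using one fixed value, can be sketched as follows. The specific formula (TTL proportional to the chosen server's capacity share and inversely proportional to the requesting domain's load share, clamped to a floor and ceiling) is an illustrative assumption, not the paper's scheme:

```python
def adaptive_ttl(base_ttl, capacity_share, domain_load_share,
                 floor=10, ceiling=3600):
    """Return a per-mapping TTL in seconds.

    capacity_share:    fraction of total cluster capacity held by the
                       server chosen for this mapping (0..1).
    domain_load_share: fraction of total request load generated by the
                       requesting client domain (0..1).

    A powerful server keeps a hot domain longer (large TTL); a weak
    server mapped to a hot domain is released quickly (small TTL).
    """
    ttl = base_ttl * capacity_share / domain_load_share
    return max(floor, min(ceiling, round(ttl)))
```

Because cached mappings are invisible to the ADNS until they expire, shrinking the TTL for risky (hot domain, weak server) pairs is the only lever the dispatcher retains.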

8.
Analyzing factors that influence end-to-end Web performance
Web performance affects the popularity of a Web site or service as well as the load on the network. Yet no publicly available end-to-end measurements had focused on a large number of popular Web servers, examining the components of delay or the effectiveness of recent changes to the HTTP protocol. In this paper we report on an extensive study carried out from many client sites, geographically distributed around the world, to a collection of over 700 servers to which a majority of Web traffic is directed. Our results show that the HTTP/1.1 protocol, particularly with pipelining, is indeed an improvement over existing practice, but that servers serving a small number of objects, or closing a persistent connection without explicit notification, can reduce or eliminate any performance improvement. Similarly, caching and multi-server content distribution can also improve performance if done effectively.

9.
Padmavathi, Poorva. Computer Networks, 2006, 50(18): 3608-3621
In this paper, we address the server selection problem for streaming applications on the Internet. The architecture we consider is similar to content distribution networks, consisting of geographically dispersed servers and user populations over an interconnected set of metropolitan areas. Server selection for Web-based applications in such an environment has been widely addressed, with selection mostly based on proximity measured by packet delay. Such a greedy or heuristic approach to server selection does not address the capacity planning problem evident in multimedia applications, for which admission control becomes an essential part of the design to maintain Quality of Service (QoS). Our objective in solving the server selection problem is threefold: first, to direct clients to the nearest server; second, to provide multiple sources to diffuse network load; third, to match server capacity to user demand so that optimal blocking performance can be expected. We accomplish all three objectives by using a special type of Linear Programming (LP) formulation called the Transportation Problem (TP). The objective function in the TP minimizes the cost, measured by network distance, of serving a video request from user population x using server y. The optimal allocation between servers and user populations from the TP yields server clusters, with the aggregated capacity of each cluster designed to meet the demands of its designated user population. Within a server cluster, we propose streaming protocols for request handling that result in a balanced load. We implement threshold-based admission control in individual servers within a cluster to enforce each designated user population's fair share of the server resource. Blocking performance is used as a trigger to find new optimal allocations when blocking rates become unacceptable due to changes in user demand.
We substantiate the analytical model with an extensive simulation analyzing the performance of the proposed clustered architecture and protocols. The simulation results show a significant difference in overall blocking performance between optimal and suboptimal allocations, by as much as 15% at moderate to high workloads. We also show that the proposed cluster protocols yield lower packet loss and latencies by forcing path diversity from multiple sources for request delivery.
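The transportation-problem allocation above minimizes total network distance subject to server capacities. As a self-contained illustration, the greedy nearest-first heuristic below respects the same constraints (demands, capacities) without solving the LP; it is an approximation for exposition, not the paper's optimal TP solution:

```python
def assign_demand(distance, capacity, demand):
    """Greedy nearest-first allocation of streaming demand to servers.

    distance[i][j]: network distance from user population i to server j.
    capacity[j]:    concurrent streams server j can admit.
    demand[i]:      streams requested by population i.
    Returns alloc[i][j], the number of streams population i draws
    from server j.
    """
    remaining = list(capacity)
    alloc = [[0] * len(capacity) for _ in demand]
    for i, need in enumerate(demand):
        # Visit servers in increasing distance from population i.
        for j in sorted(range(len(capacity)), key=lambda j: distance[i][j]):
            take = min(need, remaining[j])
            alloc[i][j] += take
            remaining[j] -= take
            need -= take
            if need == 0:
                break
        if need:
            raise ValueError("total server capacity insufficient")
    return alloc
```

Unlike the true TP optimum, a greedy pass can strand far populations on distant servers; the abstract's LP formulation avoids exactly that by optimizing globally.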

10.
As a globally distributed information system, the World Wide Web is used ever more widely, and the emergence of many interactive applications has given users high expectations for response time. Server-side document prefetching, used together with caching at non-server sites, can markedly improve the response performance of Web services. However, prefetching increases the burstiness of network transmission, increases packet retransmission and loss, and raises network queueing delay and average queue length, degrading network performance. To preserve the positive effect of prefetching on user quality of service (QoS) while reducing its negative impact on the network, we propose a smoothed …

11.
Proxy caching is an effective approach to reducing response latency to client requests, Web server load, and network traffic. Recently there has been a major shift in the usage of the Web: emerging Web applications require increasing amounts of server-side processing, yet current proxy protocols do not support caching and execution of Web processing units. In this paper, we present a weblet environment in which processing units on Web servers are implemented as weblets. These weblets can migrate from Web servers to proxy servers to perform required computation and provide faster responses. A weblet engine provides the execution environment on proxy servers as well as Web servers to facilitate uniform weblet execution. We have conducted thorough experimental studies of the performance of the weblet approach. We modify the industry-standard e-commerce benchmark TPC-W to fit the weblet model and use its workload model for performance comparisons. The experimental results show that the weblet environment significantly improves system performance in terms of client response latency, Web server throughput, and workload. Our prototype weblet system also demonstrates the feasibility of integrating a weblet environment with the current Web/proxy infrastructure.

12.
State-of-the-art cluster-based data centers consisting of three tiers (Web server, application server, and database server) are being used to host complex Web services such as e-commerce applications. The application server handles dynamic and sensitive Web contents that need protection from eavesdropping, tampering, and forgery. Although the secure sockets layer (SSL) is the most popular protocol to provide a secure channel between a client and a cluster-based network server, its high overhead degrades server performance considerably and thus affects server scalability. Therefore, improving the performance of SSL-enabled network servers is critical for designing scalable and high-performance data centers. In this paper, we examine the impact of SSL offering and SSL-session-aware distribution in cluster-based network servers. We propose a back-end forwarding scheme, called ssl_with_bf, that employs a low-overhead user-level communication mechanism like the virtual interface architecture (VIA) to achieve a good load balance among server nodes. We compare three distribution models for network servers, round robin (RR), ssl_with_session, and ssl_with_bf, through simulation. The experimental results with 16-node and 32-node cluster configurations show that, although the session reuse of ssl_with_session is critical to improving the performance of application servers, the proposed back-end forwarding scheme can further enhance performance through better load balancing. The ssl_with_bf scheme can reduce average latency by about 40 percent and improve throughput across a variety of workloads.
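The contrast between session-aware distribution and plain load balancing can be sketched as follows: requests carrying a known SSL session ID return to the node that holds that session's state (so the expensive handshake can be resumed), while new sessions go to the least-loaded node. This is a simplified illustration, not the simulated ssl_with_session or ssl_with_bf schemes themselves:

```python
class SSLDispatcher:
    """Session-aware request distributor for an SSL-enabled cluster."""

    def __init__(self, nodes):
        self.nodes = list(nodes)
        self.load = {n: 0 for n in self.nodes}   # requests sent so far
        self.sessions = {}                        # session id -> node

    def dispatch(self, session_id=None):
        if session_id is not None and session_id in self.sessions:
            # Reuse: the owning node can resume the SSL session
            # without a full handshake.
            node = self.sessions[session_id]
        else:
            # New session: pick the least-loaded node.
            node = min(self.nodes, key=lambda n: self.load[n])
            if session_id is not None:
                self.sessions[session_id] = node
        self.load[node] += 1
        return node
```

The tension the abstract measures is visible even here: pinning sessions to nodes preserves handshake reuse but can skew load, which is what back-end forwarding then corrects.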

13.
The Internet supports three communication paradigms. The first, unicast, is the point-to-point flow of packets between a single source (client) and destination (server) host; Web browsing and file transfer are unicast applications. The next, multicast, is the point-to-multipoint flow of packets between a single source host and one or more destination hosts; broadcast-style videoconferencing, for example, employs IP multicast. Anycast is the point-to-point flow of packets between a single client and the "nearest" destination server identified by an anycast address. The idea behind anycast is that a client wants to send packets to any one of several possible servers offering a particular service or application but does not really care which one. Any number of servers can be assigned a single anycast address within an anycast group. A client sends packets to an anycast server by placing the anycast address in the packet header. Routers then attempt to deliver the packet to a server with the matching anycast address.
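Anycast delivery as described, routing to the "nearest" of several servers sharing one address, reduces at a single router to picking the matching route with the smallest metric. A toy lookup over a made-up route-table format (address, server, metric):

```python
def anycast_route(dest_addr, routes):
    """Pick the delivery target for a packet addressed to dest_addr.

    routes: iterable of (address, server, metric) tuples; several
    servers may share one anycast address. The packet goes to the
    matching server with the smallest routing metric (the "nearest").
    """
    candidates = [(metric, server)
                  for addr, server, metric in routes if addr == dest_addr]
    if not candidates:
        raise LookupError("no route to " + dest_addr)
    return min(candidates)[1]
```

A unicast address is just the degenerate case with exactly one matching route.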

14.
Objective: To let users obtain interactive, realistic volume-rendering services, which require substantial computation and storage resources, anywhere and at any time, we design and implement a Web-oriented remote realistic volume rendering method. Method: The computation-heavy real-time volume rendering is GPU-accelerated on a remote rendering server, which sends rendered images to the client over WebSocket; the client browser only receives and displays the images and listens for and forwards user-interaction events. We propose an output-coupling algorithm to connect the rendering server, which produces images at a high rate, with the Web server, which sends them more slowly. Based on how the Web server's image transmissions proceed, the algorithm dynamically adjusts the number of refinement iterations computed per output image, changing the interval at which the rendering server emits images so that it balances the Web server's sending rate while keeping the rendering server continuously busy. Results: Experiments compared completion time, frame rate, and other performance metrics for rendering four data sets under different network transmission conditions, using the output-coupling algorithm versus directly connecting the renderer to the Web server. In LAN and WAN environments, our method completed the whole rendering process in at most 17 s and 14 s respectively, whereas directly connecting the renderer to the Web server required at least 31 s and 60 s. The results show that the output-coupling algorithm substantially shortens the whole rendering process under varied network conditions, letting users obtain high-quality rendered images in a short time. Conclusion: Coupling the renderer with the Web server to optimize remote interactive volume rendering lets users transparently use a high-performance rendering system over the network from desktop or mobile devices regardless of their computing power; the output-coupling algorithm adaptively adjusts the renderer's output rate to the network's carrying capacity, so users obtain high-quality rendered images quickly in different network environments.
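The output-coupling algorithm described above adjusts the per-frame iteration count so that the renderer's output interval tracks the Web server's sending interval. A minimal sketch of such a controller; the doubling/halving rule and the clamp bounds are our assumptions, and the paper's exact adjustment rule may differ:

```python
def next_iterations(iters, render_interval, send_interval,
                    min_iters=1, max_iters=64):
    """Choose the refinement-iteration count for the next frame.

    If the renderer finishes frames faster than the Web server can
    send them, do more refinement per frame (better image quality,
    slower output); if it is slower, do less, so the renderer never
    idles and never floods the sender.
    """
    if render_interval < send_interval:
        iters = min(max_iters, iters * 2)
    elif render_interval > send_interval:
        iters = max(min_iters, iters // 2)
    return iters
```

The renderer would call this once per emitted frame, feeding back the two measured intervals.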

15.
We describe the design of a system for fast and reliable HTTP service which we call Web++. Web++ achieves high reliability by dynamically replicating web data among multiple web servers. Web++ selects the available server that is expected to provide the fastest response time. Furthermore, Web++ guarantees data delivery given that at least one server containing the requested data is available. After detecting a server failure, Web++ client requests are satisfied by another server, transparently to the user. The Web++ architecture is also flexible enough to implement additional performance optimizations. We describe the implementation of one such optimization, batch resource transmission, whereby all resources embedded in an HTML page that are not cached by the client are sent to the client in a single response. Web++ is built on top of the standard HTTP protocol and requires neither changes to existing web browsers nor the installation of any software on the client side. In particular, Web++ clients are dynamically downloaded to web browsers as signed Java applets. We implemented a Web++ prototype; performance experiments indicate that the Web++ system with 3 servers improves the response time perceived by clients on average by 36.6%, and in many cases by as much as 59%, when compared with current web performance. In addition, we show that batch resource transmission can improve the response time on average by 39% for clients with fast network connections and 21% for clients with 56 Kb modem connections. This revised version was published online in August 2006 with corrections to the Cover Date.
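Web++'s server-selection behavior, picking the replica expected to respond fastest and failing over transparently when replicas are down, can be sketched as follows, with `rtt_estimate` standing in for whatever response-time predictor the client applet uses (a hypothetical name):

```python
def select_server(replicas, rtt_estimate, failed):
    """Return the live replica with the smallest estimated response time.

    replicas:     list of server names holding the requested data.
    rtt_estimate: dict server -> estimated response time (hypothetical
                  predictor; e.g. smoothed past RTTs).
    failed:       set of servers currently detected as down.

    Raises if no replica is available: delivery is guaranteed only
    while at least one server containing the data is up.
    """
    live = [s for s in replicas if s not in failed]
    if not live:
        raise ConnectionError("no replica available")
    return min(live, key=rtt_estimate.get)
```

On a request failure, the client would add the server to `failed` and simply re-run the selection, which is what makes the failover transparent.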

16.
We propose a wireless browser model based on the thin-client computing paradigm. By separating display logic from processing, the browser client can draw on the server's computing power to reduce client-side load. To control network traffic, we study how computational tasks are partitioned in the model and adopt a semi-centralized computing mode in which pages are rendered on the client, and we implement a prototype wireless browser system based on this model. Experimental results show that the model reduces both client computation load and network transmission load, shortens in-page delay, and can provide a seamless Web browsing experience.

17.
An event-driven Web server architecture with QoS control
Web servers play a key role in meeting a large and ever-growing volume of user demand, but they face many challenges, including highly concurrent user requests, differing QoS requirements, and access control. To address these challenges, we propose a new Web server architecture based on event-driven processing and QoS control, and we develop a prototype server named Epdds on it. Performance analysis of the Epdds server and comparisons with other Web servers further verify the feasibility and effectiveness of this architecture.

18.
In conventional video-on-demand systems, video data are stored in a video server for delivery to multiple receivers over a communications network. The video server's hardware limits the maximum storage capacity as well as the maximum number of video sessions that can be delivered concurrently. Clearly, these limits will eventually be exceeded by the growing need for better video quality and larger user populations. This paper studies a parallel video server architecture that exploits server parallelism to achieve incremental scalability. First, unlike data partitioning and replication, the architecture employs data striping at the server level to achieve fine-grain load balancing across multiple servers. Second, a client-pull service model is employed to eliminate the need for interserver synchronization. Third, an admission-scheduling algorithm is proposed to further control the instantaneous load at each server so that linear scalability can be achieved. This paper analyzes the performance of the architecture by deriving bounds for server service delay, client buffer requirement, prefetch delay, and scheduling delay. These performance metrics and design tradeoffs are further evaluated using numerical examples. Our results show that the proposed parallel video server architecture can be scaled up linearly to more concurrent users simply by adding more servers and redistributing the video data among them.
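Server-level striping with a client-pull model can be made concrete with a toy schedule: under round-robin striping, the client requests block b from server b mod N, so no interserver synchronization is needed and load is balanced to within one block. A sketch (round-robin placement is our assumption for illustration):

```python
def pull_schedule(num_blocks, num_servers):
    """Client-pull request schedule under round-robin striping.

    Returns, for each video block index b, the server the client asks
    for that block. Because the mapping is a pure function of b, the
    client alone drives the schedule: servers never need to
    coordinate with each other.
    """
    return [b % num_servers for b in range(num_blocks)]
```

Contrast with data partitioning (whole titles on single servers), where one popular title can hot-spot a single server no matter how many others sit idle.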

19.
A network agent located at the junction of wired and wireless networks can provide additional feedback to streaming media servers, supplementing feedback from clients. Specifically, it has been shown that feedback from the network agent has lower latency and can be used in conjunction with client feedback to effect proper congestion control. In this work, we propose the double-feedback streaming agent (DFSA), which further allows the detection of discrepancies between the transmission constraints of the wired and wireless networks. By working together with the streaming server and client, DFSA reduces overall packet losses by exploiting the excess capacity of the path with more capacity. We show how DFSA can be used to support three modes of operation tailored to the different delay requirements of streaming applications. Simulation results under high wireless latency show significant improvement of media quality using DFSA over non-agent-based and earlier agent-based streaming systems.

20.
Songqing Chen; Xiaodong Zhang. Software, 2004, 34(14): 1381-1395
The amount of dynamic Web content and secured e-commerce transactions on the Internet has been increasing dramatically, and proxy servers between clients and Web servers are commonly used to share commonly accessed data and reduce Internet traffic. Proxy servers incur significant and unnecessary overhead when processing two types of accesses, namely dynamic Web contents and secured transactions; this not only increases response time but also raises some security concerns. Conducting experiments on Squid proxy 2.3STABLE4, we have quantified this unnecessary processing overhead to show its significant impact on client access response times. We have also analyzed the technical difficulties in eliminating or reducing the processing overhead and the security loopholes of the existing proxy structure. To address these performance and security concerns, we propose a simple but effective client-side technique that adds a detector interfacing with the browser. With this detector, a standard browser, such as Netscape/Mozilla, gains simple detection and scheduling functions, becoming what we call a detective browser. Upon an Internet request from a user, the detective browser immediately determines whether the requested content is dynamic or secured; if so, the browser bypasses the proxy and forwards the request directly to the Web server, and otherwise the request is processed through the proxy. We implemented a detective browser prototype in Mozilla version 0.9.7 and tested its functionality and effectiveness. Since we simply move the necessary detection functions from the proxy server to the browser, the detective browser introduces little overhead to Internet access, and our software can easily be patched into existing browsers. Copyright © 2004 John Wiley & Sons, Ltd.
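The detective browser's routing decision, bypassing the proxy for dynamic or secured content and using it otherwise, can be sketched as below. The classification rules here (HTTPS scheme, presence of a query string, a few common dynamic-page extensions) are illustrative assumptions, not the prototype's exact detector:

```python
from urllib.parse import urlparse

def route_request(url):
    """Return 'direct' for requests that should bypass the proxy,
    'proxy' for static, cacheable content.
    """
    p = urlparse(url)
    if p.scheme == "https":
        return "direct"  # secured transaction: the proxy cannot cache it
    if p.query or p.path.endswith((".cgi", ".php", ".asp", ".jsp")):
        return "direct"  # dynamic content: uncacheable, proxy adds only delay
    return "proxy"       # static content: let the proxy cache and serve it
```

The point of the design is that this test is cheap enough to run in the browser on every request, so the proxy never sees the traffic it could not have helped with anyway.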


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号