Similar Literature
20 similar documents found (search time: 18 ms).
1.
A case study of Web server benchmarking using parallel WAN emulation (cited by 2: 0 self, 2 external)
Carey, Rob, Martin. Performance Evaluation, 2002, 49(1-4): 111-127
This paper describes the use of a parallel discrete-event network emulator called the Internet Protocol Traffic and Network Emulator (IP-TNE) for Web server benchmarking. The experiments in this paper demonstrate the feasibility of high-performance wide area network (WAN) emulation using parallel discrete-event simulation (PDES) techniques on a single shared-memory multiprocessor. Our experiments with an Apache Web server achieve up to 8000 HTTP/1.1 transactions/s for static document retrieval across emulated WAN topologies with up to 4096 concurrent Web/TCP clients. The results show that WAN characteristics, including round-trip delays, packet losses, and bandwidth asymmetry, all have significant impacts on Web server performance, as do client protocol behaviors. WAN emulation using the IP-TNE enables stress testing and benchmarking of Web servers in ways that may not be possible in simple local area network (LAN) test scenarios.

2.
We describe the design of a system for fast and reliable HTTP service which we call Web++. Web++ achieves high reliability by dynamically replicating web data among multiple web servers. Web++ selects the available server that is expected to provide the fastest response time. Furthermore, Web++ guarantees data delivery given that at least one server containing the requested data is available. After detecting a server failure, Web++ client requests are satisfied transparently to the user by another server. Furthermore, the Web++ architecture is flexible enough for implementing additional performance optimizations. We describe implementation of one such optimization, batch resource transmission, whereby all resources embedded in an HTML page that are not cached by the client are sent to the client in a single response. Web++ is built on top of the standard HTTP protocol and does not require any changes either in existing web browsers or the installation of any software on the client side. In particular, Web++ clients are dynamically downloaded to web browsers as signed Java applets. We implemented a Web++ prototype; performance experiments indicate that the Web++ system with 3 servers improves the response time perceived by clients on average by 36.6%, and in many cases by as much as 59%, when compared with the current web performance. In addition, we show that batch resource transmission can improve the response time on average by 39% for clients with fast network connections and 21% for the clients with 56 Kb modem connections. This revised version was published online in August 2006 with corrections to the Cover Date.
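The selection and failover behavior described above can be pictured with a small probe-and-select routine. This is only an illustrative sketch under assumptions (TCP connect time as the latency estimate, hypothetical replica host names), not the Web++ applet client itself:

```python
# Sketch: pick the replica expected to respond fastest, skipping unreachable ones.
# Replica host names are placeholders, not from the paper.
import socket
import time

REPLICAS = ["replica1.example.com", "replica2.example.com", "replica3.example.com"]

def probe(host, port=80, timeout=2.0):
    """Measure the TCP connect time to one replica; return None if it is unreachable."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None

def pick_server(hosts=REPLICAS):
    """Return the fastest-responding available replica, as long as at least one is up."""
    timings = [(rtt, h) for h in hosts if (rtt := probe(h)) is not None]
    if not timings:
        raise RuntimeError("no replica holding the requested data is available")
    return min(timings)[1]
```

A real client would reuse recent measurements rather than probing every replica on each request, but the fallback property is the same: as long as one replica is reachable, pick_server returns a usable host.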

3.
Xiong Zhi, Guo Chengcheng. Computer Engineering, 2008, 34(5): 110-112
The persistent-connection feature of HTTP/1.1 introduces extra overhead for Web cluster servers that use content-based request dispatching. To reduce this overhead, pages that users frequently access together can be grouped into clusters and documents can then be distributed cluster by cluster. How to measure the distance between Web pages is the key problem in page clustering. This paper proposes a Markov chain based method for measuring the distance between pages that considers both the temporal correlation of user accesses and the users' access paths. Examples show that, compared with a distance measure based on temporal correlation alone, the proposed measure more effectively reduces the extra overhead that HTTP/1.1 persistent connections incur after pages are clustered.
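The abstract does not give the exact distance formula, so the following Python sketch only illustrates the general idea: estimate a first-order Markov transition matrix from user access paths and treat pages with high mutual transition probability as close. The sample sessions and the specific distance expression are assumptions, not the paper's method:

```python
from collections import defaultdict

def transition_matrix(sessions):
    """Estimate P(next=j | current=i) from per-user page access paths."""
    counts = defaultdict(lambda: defaultdict(int))
    for path in sessions:
        for cur, nxt in zip(path, path[1:]):
            counts[cur][nxt] += 1
    probs = {}
    for i, row in counts.items():
        total = sum(row.values())
        probs[i] = {j: c / total for j, c in row.items()}
    return probs

def page_distance(p, i, j):
    """Illustrative distance: high mutual transition probability -> small distance."""
    forward = p.get(i, {}).get(j, 0.0)
    backward = p.get(j, {}).get(i, 0.0)
    return 1.0 - (forward + backward) / 2.0

sessions = [["/index", "/news", "/sports"], ["/index", "/news"], ["/index", "/sports"]]
P = transition_matrix(sessions)
print(page_distance(P, "/index", "/news"))   # ~0.67: followed each other in 2 of 3 sessions
print(page_distance(P, "/index", "/about"))  # 1.0: never accessed in sequence
```

Pages whose pairwise distance falls below a threshold would then be placed in the same cluster and served by the same back-end node, so a single persistent connection can cover the whole group.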

4.
The founder of Agranat Systems examines the design issues involved in engineering effective Web technologies for embedded systems. Small embedded TCP/IP stacks and Web server software now make it possible to manufacture reliable, inexpensive Web-enabled devices across many industries and markets. Embedded systems require Web servers that are designed to minimize memory footprint and avoid interference with mission-critical and real-time applications. To guarantee a reliable user interface with minimal impact on system performance, the server software should utilize the latest HTTP 1.1 standards from the Internet Engineering Task Force. It won't be long before intelligent devices worldwide will be nodes on a network and managed from Web browsers.

5.
Zari, M., Saiedian, H., Naeem, M. Computer, 2001, 34(12): 30-37
Slow performance costs e-commerce Web sites as much as $4.35 billion annually in lost revenue. Perceived latency, the amount of time between when a user issues a request and when the response is received, is a critical issue. Research into improving performance falls into two categories: work on servers and work on networks and protocols. On the server side, previous work has focused on techniques for improving server performance. Such studies show how Web servers behave under a range of loads and often suggest enhancements to application implementations and to the operating systems those servers run on. On the network side, research has focused on improving network infrastructure performance for Internet applications. Studies focusing on network dynamics have resulted in several enhancements to HTTP, including data compression, persistent connections, and pipelining; these improvements are all part of HTTP 1.1. However, little work has been done on the common latency sources that cause the overall delays that frustrate end users. The future of performance improvements lies in developing additional techniques that implement efficient, scalable, and stable improvements to the end-user experience.

6.
This paper studies the application of the XMLHttp object from Ajax in Web-based embedded remote control and real-time monitoring systems, and presents a concrete implementation based on the Microchip TCP/IP stack, effectively solving the problem of interaction between the client-side Web page and the embedded HTTP server.

7.
To support Web clusters with efficient dispatching mechanisms and policies, we propose a light-weight TCP connection transfer mechanism, TCP Rebuilding, and use it to develop a content-aware request dispatching platform, LVS-CAD, in which the request dispatcher can extract and analyze the content in requests and then dispatch each request by its content or type of service requested. To efficiently support HTTP/1.1 persistent connections in Web clusters, request scheduling should be performed per request rather than per connection. Consequently, multiple TCP Rebuilding, an extension of normal TCP Rebuilding, is proposed and implemented. On this platform, we also devise fast TCP module handshaking to process the handshaking between clients and the request dispatcher in the IP layer instead of in the TCP layer for faster response times. Furthermore, we propose content-aware request distribution policies that consider cache locality and various types of costs for dispatching requests on this platform, which makes the resource utilization of Web servers more effective. Experimental results of a practical implementation on Linux show that the proposed system, mechanisms, and policies can effectively improve the performance of Web clusters. Copyright © 2007 John Wiley & Sons, Ltd.

8.
Analyzing factors that influence end-to-end Web performance (cited by 1: 0 self, 1 external)
Web performance impacts the popularity of a particular Web site or service as well as the load on the network, but there have been no publicly available end-to-end measurements that have focused on a large number of popular Web servers examining the components of delay or the effectiveness of the recent changes to the HTTP protocol. In this paper we report on an extensive study carried out from many client sites geographically distributed around the world to a collection of over 700 servers to which a majority of Web traffic is directed. Our results show that the HTTP/1.1 protocol, particularly with pipelining, is indeed an improvement over existing practice, but that servers serving a small number of objects or closing a persistent connection without explicit notification can reduce or eliminate any performance improvement. Similarly, use of caching and multi-server content distribution can also improve performance if done effectively.

9.
Computer Networks, 1999, 31(11-16): 1563-1577
This paper presents a study of Web content adaptation to improve server overload performance, as well as an implementation of a Web content adaptation software prototype. When the request rate on a Web server increases beyond server capacity, the server becomes overloaded and unresponsive. The TCP listen queue of the server's socket overflows, exhibiting drop-tail behavior. As a result, clients experience service outages. Since clients typically issue multiple requests over the duration of a session with the server, and since requests are dropped indiscriminately, all clients connecting to the server at overload are likely to experience connection failures, even though there may be enough capacity on the server to deliver all responses properly for a subset of clients. In this paper, we propose to resolve the overload problem by adapting the delivered content to load conditions. The premise is that successful delivery of less resource-intensive content under overload is more desirable to clients than connection rejection or failure. The paper argues the feasibility of content adaptation from three viewpoints: (a) the potential for automating content adaptation with minimal involvement of the content provider, (b) the ability to achieve sufficient savings in resource requirements by adapting present-day Web content while preserving adequate information, and (c) the feasibility of applying content adaptation on the Web with no modification to existing Web servers, browsers, or the HTTP protocol.
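As an illustration of the premise, the sketch below switches to a degraded page variant once the measured request rate exceeds an assumed capacity. The window, threshold, and degradation policy are placeholders, not the prototype's design:

```python
# Sketch: load-triggered content degradation under assumed thresholds.
import time
from collections import deque

class AdaptiveServer:
    def __init__(self, capacity_rps=100, window_s=1.0):
        self.capacity = capacity_rps   # assumed server capacity in requests/second
        self.window = window_s
        self.arrivals = deque()        # arrival timestamps within the sliding window

    def _current_rate(self, now):
        while self.arrivals and now - self.arrivals[0] > self.window:
            self.arrivals.popleft()
        return len(self.arrivals) / self.window

    def serve(self, page):
        now = time.monotonic()
        self.arrivals.append(now)
        if self._current_rate(now) > self.capacity:
            return f"{page} (degraded: low-resolution images, no inline extras)"
        return f"{page} (full content)"

server = AdaptiveServer(capacity_rps=2)
for _ in range(4):
    print(server.serve("/index.html"))  # later requests in the burst get the degraded variant
```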

10.
With the rapid growth of cloud computing and the mobile Internet, cloud servers must maintain real-time interaction with very large numbers of clients simultaneously; XMPP messaging implements this over long-lived TCP connections. However, most existing XMPP server systems are designed around traditional concurrency models, perform poorly overall, and cannot meet large-scale concurrency requirements. Targeting the characteristics of XMPP servers, this paper proposes an XMPP server architecture based on the Actor model and presents a distributed message routing algorithm based on consistent hashing, effectively improving the system's concurrency, elastic scalability, and message delivery efficiency. Experiments show that a system built with this approach substantially outperforms existing systems and can handle large-scale concurrent scenarios.
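The routing idea can be sketched with a small consistent-hash ring: each server node appears on the ring as several virtual nodes, and a message is routed to the first node clockwise from the hash of the target user's JID, so adding or removing a node only remaps a small fraction of users. The node names and virtual-node count below are assumptions, not the paper's implementation:

```python
import bisect
import hashlib

def _hash(key: str) -> int:
    """Map a string to a point on the ring."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes, vnodes=64):
        # Place every node on the ring vnodes times to smooth the load.
        self._ring = sorted((_hash(f"{n}#{i}"), n) for n in nodes for i in range(vnodes))
        self._keys = [h for h, _ in self._ring]

    def route(self, jid: str) -> str:
        """Return the node responsible for this user's session."""
        idx = bisect.bisect(self._keys, _hash(jid)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["xmpp-node-1", "xmpp-node-2", "xmpp-node-3"])
print(ring.route("alice@example.com"))
```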

11.
Research on an HTTP protocol implementation scheme on Linux and its performance improvement (cited by 3: 0 self, 3 external)
Guo Hui, Zhou Jingli, Yu Shengsheng. Computer Engineering, 2001, 27(11): 117-119
How to transfer data over the HTTP protocol on a Linux system is a question that merits discussion. This article explains an implementation scheme for HTTP using the Linux socket programming interface and presents the protocol's communication model. Meanwhile, many studies have shown that TCP does not efficiently support the transaction-style information exchange between Web servers and clients. The article therefore makes an exploratory study of improving Web performance from the angles of reducing protocol overhead and optimizing access efficiency, and offers a solution.
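A minimal version of the communication model, an HTTP request written to a TCP socket and the response read back, looks roughly like the sketch below (the host and path are placeholders; the article's own implementation targets the Linux socket API in C):

```python
import socket

def http_get(host, path="/", port=80):
    """Send a plain HTTP/1.1 GET over a TCP socket and return the raw response."""
    with socket.create_connection((host, port), timeout=5) as sock:
        request = (f"GET {path} HTTP/1.1\r\n"
                   f"Host: {host}\r\n"
                   "Connection: close\r\n\r\n")
        sock.sendall(request.encode("ascii"))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:          # server closed the connection: response complete
                break
            chunks.append(data)
    return b"".join(chunks)

response = http_get("example.com")
print(response.split(b"\r\n", 1)[0])  # status line, e.g. b'HTTP/1.1 200 OK'
```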

12.
Predictive control and proportional delay guarantees in Web QoS (cited by 1: 0 self, 1 external)
Control theory has been applied to Web servers to improve their QoS. However, when Web load changes sharply, existing feedback-based proportional delay control often responds too slowly. This paper analyzes the heavy-tailed characteristics of the number of embedded URLs and the sizes of embedded files in HTTP/1.1 page requests. Based on the queueing behavior of Web servers and the characteristics of HTTP/1.1 persistent connections, it predicts queueing delay from the lengths of the waiting queues of different priority classes and adjusts the corresponding service thread quotas, achieving relative proportional delay guarantees. Experiments show that the method works well and significantly improves the system's dynamic performance.
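The control idea can be sketched with a simple assumed model: predict each class's delay as queue_length × mean_service_time / threads and allocate the thread pool so the predicted delays follow the target ratio. The model and numbers below are illustrative, not the paper's controller:

```python
def allocate_threads(queue_lens, service_time, total_threads, delay_ratio):
    """Split a thread pool so predicted per-class delays follow delay_ratio."""
    # delay_i = q_i * s / n_i ; we want delay_i proportional to r_i,
    # so n_i should be proportional to q_i / r_i.
    weights = [q / r for q, r in zip(queue_lens, delay_ratio)]
    total_w = sum(weights) or 1.0
    quotas = [max(1, round(total_threads * w / total_w)) for w in weights]
    predicted = [q * service_time / n for q, n in zip(queue_lens, quotas)]
    return quotas, predicted

# Two priority classes with target delay ratio premium:basic = 1:3.
quotas, delays = allocate_threads(
    queue_lens=[40, 60], service_time=0.02, total_threads=32, delay_ratio=[1, 3])
print(quotas, delays)  # the premium class gets more threads, so its delay is roughly 1/3 of basic
```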

13.
The Web is increasingly used for critical applications and services. We present a client-transparent mechanism, called CoRAL, that provides high reliability and availability for Web service. CoRAL provides fault tolerance even for requests being processed at the time of server failure. The scheme does not require deterministic servers and can thus handle dynamic content. CoRAL actively replicates the TCP connection state while maintaining logs of HTTP requests and replies. In the event of a primary server failure, active client connections fail over to a spare, where their processing continues seamlessly. We describe key aspects of the design and implementation as well as several performance optimizations. Measurements of system overhead, failover performance, and preliminary validation using fault injection are presented.

14.
Design and implementation of a Linux-based embedded HTTP network server (cited by 6: 0 self, 6 external)
Liu Dianmin, Li Kejie. Computer Engineering, 2004, 30(23): 193-195
This paper introduces the hardware and software design and implementation of an embedded HTTP network server based on the PXA250 and Linux. It describes the hardware design principles of the PXA250-based embedded system and the software algorithms and program design of an embedded HTTP server handling multi-process and multi-threaded concurrent connections. The software is built on the HTTP protocol and covers three key elements: sending and receiving a standard HTML page, the communication used when a client submits a Web form request to the embedded HTTP server, and the CGI interface program. It also discusses how multiple threads can share data resources and still work safely and reliably, using mutexes and condition variables to solve the synchronization problems introduced by concurrency.
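The synchronization pattern described, a mutex plus a condition variable protecting work shared between an accept thread and worker threads, is sketched below in Python as a stand-in for the original C code on PXA250/Linux:

```python
import threading
from collections import deque

work_queue, lock = deque(), threading.Lock()
not_empty = threading.Condition(lock)   # condition variable bound to the mutex

def worker():
    while True:
        with not_empty:                  # acquire the mutex
            while not work_queue:
                not_empty.wait()         # sleep until the accept thread enqueues work
            conn = work_queue.popleft()
        if conn is None:                 # shutdown sentinel
            return
        print(f"handling request {conn}")  # placeholder for parsing an HTTP request

def submit(conn):
    with not_empty:
        work_queue.append(conn)
        not_empty.notify()               # wake one waiting worker

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for i in range(8):
    submit(i)
for _ in threads:
    submit(None)
for t in threads:
    t.join()
```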

15.
Wan Jun, Zhou Lijie. Modern Computer, 2007, (11): 142-144
Using an ARM development platform, this paper designs a data monitoring system based on an embedded Web server and implements a concrete serial-port-based data monitoring system. The system uses the Boa server as its HTTP server; data collected over the serial port is displayed in the user's browser in real time, and data can be modified dynamically and sent to external devices over the serial port.

16.
Research and implementation of embedded Internet technology (cited by 5: 2 self, 3 external)
This paper studies how to trim the TCP/IP protocol stack that underlies HTTP service when embedded Web server technology is used in embedded devices, focusing on paring down the protocols selected from the stack. The trimmed HTTP and ICMP protocol modules were tested in a laboratory environment and met their goals, and a program-level processing method for the trimmed HTTP protocol is presented.

17.
Computer Networks, 1999, 31(11-16): 1725-1736
The World-Wide Web provides remote access to pages using its own naming scheme (URLs), transfer protocol (HTTP), and cache algorithms. Not only does using these special-purpose mechanisms have performance implications, but they make it impossible for standard Unix applications to access the Web. Gecko is a system that provides access to the Web via the NFS protocol. URLs are mapped to Unix file names, providing unmodified applications access to Web pages; pages are transferred from the Gecko server to the clients using NFS instead of HTTP, significantly improving performance; and NFS's cache consistency mechanism ensures that all clients have the same version of a page. Applications access pages as they would Unix files. A client-side proxy translates HTTP requests into file accesses, allowing existing Web applications to use Gecko. Experiments performed on our prototype show that Gecko is able to provide this additional functionality at a performance level that exceeds that of HTTP.
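The URL-to-file-name mapping can be pictured with a small helper; the mount point and escaping scheme below are assumptions, since the abstract does not give Gecko's actual naming rules:

```python
from urllib.parse import urlparse, quote

def url_to_path(url, mount="/gecko"):
    """Map a URL onto a Unix-style path under an assumed NFS mount point."""
    u = urlparse(url)
    path = u.path if u.path and u.path != "/" else "/index.html"
    return f"{mount}/{u.scheme}/{quote(u.netloc, safe='')}{path}"

print(url_to_path("http://www.example.com/docs/page.html"))
# -> /gecko/http/www.example.com/docs/page.html
```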

18.
An HTTP server is a kind of Web server, one built on the Hypertext Transfer Protocol (HTTP). This article introduces the key application techniques, working principles, and model design of a multithreaded HTTP server developed with VC, analyzes the functions the HTTP server can implement with reference to its code, and presents test results.

19.
Performance analysis of HTTP services based on active measurement (cited by 2: 0 self, 2 external)
Web traffic accounts for a large share of today's Internet traffic, and the effect of the network path on HTTP service performance directly determines the differences users perceive in Web server response time and page download rate. Using active service emulation, this paper measures the end-to-end performance of HTTP services along a path and analyzes the composition and distribution of service response time. It finds that the download rate of the same page accessed at different times is highly variable, showing that the common mean/standard-deviation approach to performance fault detection is not entirely suitable for this kind of metric, and it proposes a box-plot based algorithm for detecting end-to-end HTTP service performance faults.
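The detection idea can be sketched with the textbook 1.5×IQR box-plot rule applied to download-rate samples; the threshold and sample values below are illustrative, not the paper's exact algorithm:

```python
import statistics

def boxplot_outliers(samples, k=1.5):
    """Flag samples outside the [Q1 - k*IQR, Q3 + k*IQR] fence."""
    q1, _, q3 = statistics.quantiles(samples, n=4)
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    return [x for x in samples if x < low or x > high], (low, high)

# Repeated downloads of one page: heavy-tailed rates, with one abnormally slow sample.
rates_kbps = [820, 790, 805, 760, 815, 60, 830, 798]
outliers, fence = boxplot_outliers(rates_kbps)
print(outliers, fence)  # [60] is flagged as a performance-fault candidate
```

Because the fence is built from quartiles rather than the mean and standard deviation, a few extreme samples do not distort the threshold, which is why this style of test suits highly variable download rates better than a mean/std rule.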

20.
An embedded remote monitoring server based on the STM32 microcontroller is designed for use in data center remote monitoring systems. The embedded Web server is the core of the remote monitoring system: it collects sensor data and provides HTTP service to the remote monitoring center. To address the page-refresh flicker of the Common Gateway Interface (CGI) approach, AJAX dynamic page techniques are adopted, improving how monitoring data is displayed. The embedded Web server is also highly extensible and can be applied widely in remote monitoring systems.
