Similar Documents
20 similar documents found.
1.
This paper proposes a Web-based adaptive content delivery management scheme that provides performance services at different priority levels for different network clients, keeps the service classes independent of one another, and shares system resources on the basis of request rate and transmission bandwidth. Building on traditional admission control, the scheme improves service performance by adaptively selecting the content to transmit according to the network load and the quality-of-service requirements, thereby meeting clients' QoS demands for network delivery. The management scheme can be deployed in the middle tier of a three-tier model, which makes it easy to modify and maintain; the experimental results presented demonstrate its effectiveness.
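The abstract does not give the selection rule itself; the following is a minimal Python sketch of how a middle-tier component might adaptively pick a content variant and apply admission control from the current load and the client's priority class. The variant names, thresholds, and per-class limits are illustrative assumptions, not the paper's scheme.

```python
# Hypothetical sketch: pick a content variant (full / reduced / text-only)
# from the current server load and the client's priority class.
# Thresholds and class names are illustrative, not from the paper.

VARIANTS = ["full", "reduced", "text-only"]  # descending richness/cost

def select_variant(load: float, priority: int) -> str:
    """load: utilization in [0, 1]; priority: 0 = highest, 2 = lowest."""
    if load < 0.6:
        return "full"                      # light load: everyone gets full content
    if load < 0.85:
        return "full" if priority == 0 else "reduced"
    # near overload: degrade lower-priority traffic more aggressively
    return VARIANTS[min(priority + 1, len(VARIANTS) - 1)]

def admit(load: float, priority: int) -> bool:
    """Simple admission control: shed the lowest class first as load rises."""
    limits = {0: 0.98, 1: 0.92, 2: 0.85}   # per-class admission ceilings (assumed)
    return load <= limits[priority]

if __name__ == "__main__":
    for load in (0.4, 0.8, 0.95):
        print(load, [(p, admit(load, p), select_variant(load, p)) for p in range(3)])
```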

2.
An Analysis of QoS Techniques for Session-Based Web Server Application Software
Traditional Web server application software neither identifies nor differentiates client requests: each request is processed as soon as it is received. This "one size fits all" service therefore cannot provide Web QoS guarantees for high-priority requests. By improving the application software, a Web server can provide Web QoS for different clients or requests. The main approach is to classify clients' HTTP requests and to implement mechanisms such as prioritized scheduling, admission control, and resource allocation. This paper surveys QoS techniques for session-based Web server application software and points out their remaining shortcomings.
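As a minimal sketch of the general pattern this abstract describes, the snippet below classifies requests by an assumed session class, serves them from a priority queue, and applies a crude admission check when the backlog grows. The class names, the queue bound, and the mapping are assumptions, not the surveyed techniques themselves.

```python
# Sketch of session-based request classification + prioritized dispatch.
# Class names, the session->class mapping, and the queue bound are assumptions.
import heapq
import itertools

PRIORITY = {"gold": 0, "silver": 1, "bronze": 2}   # lower value = served first
MAX_QUEUE = 100                                     # admission-control bound (assumed)

class Scheduler:
    def __init__(self):
        self._queue, self._seq = [], itertools.count()

    def admit(self, session_class: str, request: str) -> bool:
        """Reject lower-priority work when the backlog is large."""
        prio = PRIORITY.get(session_class, 2)
        if len(self._queue) >= MAX_QUEUE and prio > 0:
            return False
        heapq.heappush(self._queue, (prio, next(self._seq), request))
        return True

    def next_request(self):
        """Always dispatch the highest-priority (then oldest) pending request."""
        return heapq.heappop(self._queue)[2] if self._queue else None

if __name__ == "__main__":
    s = Scheduler()
    s.admit("bronze", "GET /catalog")
    s.admit("gold", "POST /checkout")
    print(s.next_request())   # the gold-class request is served first
```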

3.
We describe the design of a system for fast and reliable HTTP service which we call Web++. Web++ achieves high reliability by dynamically replicating web data among multiple web servers. Web++ selects the available server that is expected to provide the fastest response time. Furthermore, Web++ guarantees data delivery given that at least one server containing the requested data is available. After detecting a server failure, Web++ client requests are satisfied transparently to the user by another server. Furthermore, the Web++ architecture is flexible enough for implementing additional performance optimizations. We describe the implementation of one such optimization, batch resource transmission, whereby all resources embedded in an HTML page that are not cached by the client are sent to the client in a single response. Web++ is built on top of the standard HTTP protocol and does not require any changes to existing web browsers or the installation of any software on the client side. In particular, Web++ clients are dynamically downloaded to web browsers as signed Java applets. We implemented a Web++ prototype; performance experiments indicate that the Web++ system with 3 servers improves the response time perceived by clients on average by 36.6%, and in many cases by as much as 59%, when compared with the current web performance. In addition, we show that batch resource transmission can improve the response time on average by 39% for clients with fast network connections and 21% for clients with 56 Kb modem connections.
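The abstract summarizes the client-side policy (send the request to the replica expected to respond fastest, and fail over to another replica when one is down); the sketch below is only a generic illustration of that policy with invented latency estimates, not the Web++ applet.

```python
# Generic illustration of fastest-replica selection with failover: try replicas
# in order of estimated response time and skip any that have failed.
# The latency table and the failure set are invented; a real client would
# measure these values.

ESTIMATED_RTT = {"s1.example.org": 0.12, "s2.example.org": 0.05, "s3.example.org": 0.30}
DOWN = {"s2.example.org"}                      # pretend this replica has failed

def fetch(path):
    for server in sorted(ESTIMATED_RTT, key=ESTIMATED_RTT.get):   # fastest first
        if server in DOWN:                     # detected failure: transparently fail over
            continue
        return f"GET http://{server}{path} -> 200 OK"
    raise RuntimeError("no replica available")

if __name__ == "__main__":
    print(fetch("/index.html"))                # served by the fastest live replica
```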

4.
Replication of information across multiple servers is becoming a common approach to support popular Web sites. A distributed architecture with some mechanisms to assign client requests to Web servers is more scalable than any centralized or mirrored architecture. In this paper, we consider distributed systems in which the Authoritative Domain Name Server (ADNS) of the Web site takes the request dispatcher role by mapping the URL hostname into the IP address of a visible node, that is, a Web server or a Web cluster interface. This architecture can support local and geographical distribution of the Web servers. However, the ADNS controls only a very small fraction of the requests reaching the Web site because the address mapping is not requested for each client access. Indeed, to reduce Internet traffic, address resolution is cached at various name servers for a time-to-live (TTL) period. This opens an entirely new set of problems that traditional centralized schedulers of parallel/distributed systems do not have to face. The heterogeneity assumption on Web node capacity, which is much more likely in practice, increases the order of complexity of the request assignment problem and severely affects the applicability and performance of the existing load sharing algorithms. We propose new assignment strategies, namely adaptive TTL schemes, which tailor the TTL value for each address mapping instead of using a fixed value for all mapping requests. The adaptive TTL schemes are able to address both the nonuniformity of client requests and the heterogeneous capacity of Web server nodes. Extensive simulations show that the proposed algorithms are very effective in avoiding node overload, even for high levels of heterogeneity and limited ADNS control.
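The paper's exact TTL formula is not reproduced in the abstract; the sketch below only illustrates the general idea of an adaptive TTL, returning a longer DNS TTL for mappings that point to higher-capacity, less-loaded nodes. The server table, base TTL, and scaling rule are made-up assumptions.

```python
# Illustrative adaptive-TTL dispatcher: the ADNS picks a server and returns a
# TTL scaled by that server's capacity and current load. Numbers are invented.

SERVERS = {
    "10.0.0.1": {"capacity": 4.0, "load": 0.50},   # relative capacity, utilization
    "10.0.0.2": {"capacity": 1.0, "load": 0.30},
}
BASE_TTL = 60  # seconds, for a capacity-1.0 idle server (assumed)

def resolve(hostname: str):
    # least-loaded first (a stand-in for whatever dispatching policy is used)
    ip = min(SERVERS, key=lambda s: SERVERS[s]["load"])
    info = SERVERS[ip]
    # more capable and less loaded -> longer TTL, so the node keeps more of the
    # cached-resolution traffic; weaker or busier nodes get a short TTL.
    ttl = int(BASE_TTL * info["capacity"] * (1.0 - info["load"]))
    return ip, max(ttl, 5)

if __name__ == "__main__":
    print(resolve("www.example.com"))
```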

5.
Load-State Detection Techniques for Web Servers
The aggregation of Internet service requests places Web servers under heavy load and easily drives them into overload; this paper studies methods for detecting a server's load state in depth. Internal resource consumption is directly related to the volume of work being processed, so the load state can be evaluated accurately by monitoring multiple internal resources of the Web server in parallel. Based on an analysis of Web server load characteristics, the paper presents the detection parameters, a detection model, and a detection method that fuses data from the multiple monitored parameters. Analysis of data monitored on a real system demonstrates the effectiveness of the method.
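The abstract says the load state is judged by fusing several internal resource measurements but does not give the detection model, so the following weighted-fusion sketch is only an assumption of one simple way such parameters might be combined; the metric names, weights, and thresholds are invented.

```python
# Assumed example of fusing several monitored resource parameters into one
# load-state estimate. Metric names, weights, and thresholds are illustrative.

WEIGHTS = {"cpu": 0.4, "memory": 0.2, "connections": 0.2, "queue": 0.2}
THRESHOLDS = [(0.6, "normal"), (0.85, "heavy")]

def load_state(metrics):
    """metrics: each value already normalized to [0, 1] of its capacity."""
    score = sum(WEIGHTS[name] * metrics[name] for name in WEIGHTS)
    for limit, label in THRESHOLDS:
        if score < limit:
            return score, label
    return score, "overload"

if __name__ == "__main__":
    sample = {"cpu": 0.9, "memory": 0.6, "connections": 0.7, "queue": 0.8}
    print(load_state(sample))   # score ~= 0.78 -> 'heavy'
```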

6.
With increasing richness in features such as personalization of content, Web applications are becoming increasingly complex and hence compute intensive. Traditional approaches for improving performance of static content Web sites have been based on the assumption that static content such as images are network intensive. However, these methods are not applicable to the dynamic content applications which are more compute intensive than static content. This paper proposes a suite of algorithms which jointly optimize the performance of dynamic content applications by reducing the client access times while also minimizing the resource utilization. A server migration algorithm allocates servers on-demand within a cluster such that the client access times are not affected even under sudden overload conditions. Further, a server selection mechanism enables statistical multiplexing of resources across clusters by redirecting requests away from overloaded clusters. We also propose a cluster decision algorithm which decides whether to migrate in additional servers at the local cluster or redirect requests remotely under different workload conditions. Through a combination of analytical modeling, trace-driven simulation over traces from large e-commerce sites and testbed implementation, we explore the performance savings achieved by the proposed algorithms.
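The abstract names three cooperating mechanisms (on-demand server migration, request redirection, and a cluster decision algorithm) without detailing them; the sketch below is only a hypothetical illustration of the decision step, choosing between activating a spare local server and redirecting to a remote cluster, with invented thresholds and costs.

```python
# Hypothetical sketch of the "migrate locally vs. redirect remotely" decision.
# Capacities, the redirect penalty, and the decision rule are assumptions.

def cluster_decision(local_load, local_idle_servers, remote_load, redirect_penalty=0.15):
    """Return an action for a cluster, given normalized utilizations in [0, 1].

    redirect_penalty: extra latency cost (normalized) of serving remotely.
    """
    if local_load < 0.8:
        return "serve-locally"            # not overloaded, do nothing
    if local_idle_servers > 0:
        return "migrate-in-local-server"  # cheapest fix: activate spare capacity
    if remote_load + redirect_penalty < local_load:
        return "redirect-to-remote"       # remote cluster is better even with penalty
    return "serve-locally"                # everyone is busy; degrade locally

if __name__ == "__main__":
    print(cluster_decision(0.93, 1, 0.70))   # -> migrate-in-local-server
    print(cluster_decision(0.93, 0, 0.60))   # -> redirect-to-remote
```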

7.
A Distributed Web Server System with Tag-Based Cache Cooperation
林曼筠  钱华林 《软件学报》2003,14(1):117-123
This paper introduces distributed Web server systems, a leading technique for improving Web server performance, and discusses the strengths and weaknesses of existing schemes. On this basis it proposes a new distributed Web server system. The system uses a tag-based cache-cooperative Web request distribution method (TB-CCRD): the front-end node organizes the caches of the individual Web servers into one large virtual cache, which raises the overall cache hit rate and shortens request response times; distributed handling of TCP connection hand-off removes the front end as a performance bottleneck; and tags announce where each URL is cached, avoiding extra intra-system communication. The result is a scalable, high-performance distributed Web server system.
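TB-CCRD's protocol details (tag announcements, TCP hand-off) are beyond the abstract; the snippet below is a toy sketch of the core routing idea only: the front end keeps a tag table recording which back end caches each URL and hands matching requests to that node. The data structures, the hashing fallback, and the announcement call are assumptions.

```python
# Toy sketch of tag-based, cache-aware request distribution: the front end
# tracks which back-end server caches each URL (the "tag") and routes there.
import hashlib

class FrontEnd:
    def __init__(self, servers):
        self.servers = servers
        self.tag_table = {}                # URL -> server that announced a cached copy

    def announce(self, url, server):
        """A back end tells the front end it now caches this URL."""
        self.tag_table[url] = server

    def route(self, url):
        if url in self.tag_table:          # cache-aware hit: reuse the cached copy
            return self.tag_table[url]
        # miss: pick a server deterministically so its cache warms up for this URL
        idx = int(hashlib.md5(url.encode()).hexdigest(), 16) % len(self.servers)
        return self.servers[idx]

if __name__ == "__main__":
    fe = FrontEnd(["ws1", "ws2", "ws3"])
    first = fe.route("/index.html")
    fe.announce("/index.html", first)      # the chosen server caches the page
    print(first, fe.route("/index.html"))  # subsequent requests go to the same node
```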

8.
Among the web application server resources, the most critical for their performance are those that are held exclusively by a service request for the duration of its execution (or some significant part of it). Such exclusively held server resources become performance bottleneck points, with failures to obtain such a resource constituting a major portion of request rejections under server overload conditions. In this paper, we propose a methodology that computes the optimal pool sizes for two such critical resources: web server threads and database connections. Our methodology uses information about incoming request flow and about fine‐grained server resource utilization by service requests of different types, obtained through offline and online request profiling. In our methodology, we advocate (and show its benefits) the use of a database connection pooling mechanism that caches database connections for the duration of a service request execution (so‐called request‐wide database connection caching). We evaluate our methodology by testing it on the TPC‐W web application. Our method is able to accurately compute the optimal number of server threads and database connections, and the value of sustainable request throughput computed by the method always lies within a 5% margin of the actual value determined experimentally.
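The pool-sizing computation itself is not in the abstract, but the request-wide connection caching it advocates is easy to illustrate: a connection is checked out on the first database call of a request and reused until the request completes. The sketch below is a generic, assumed illustration with placeholder pool and connection objects, not the authors' implementation or any real driver API.

```python
# Sketch of request-wide database connection caching: a request checks a
# connection out of the pool on first use and keeps it until it finishes.
import queue

class Pool:
    def __init__(self, size):
        self._free = queue.Queue()
        for i in range(size):
            self._free.put(f"conn-{i}")          # placeholder connection objects

    def acquire(self, timeout=1.0):
        return self._free.get(timeout=timeout)   # blocks; raises queue.Empty on overload

    def release(self, conn):
        self._free.put(conn)

class RequestContext:
    """Caches one pool connection for the whole lifetime of a service request."""
    def __init__(self, pool):
        self._pool, self._conn = pool, None

    def connection(self):
        if self._conn is None:                    # first DB call of this request
            self._conn = self._pool.acquire()
        return self._conn                         # later calls reuse the same one

    def finish(self):
        if self._conn is not None:
            self._pool.release(self._conn)
            self._conn = None

if __name__ == "__main__":
    pool = Pool(size=2)
    ctx = RequestContext(pool)
    print(ctx.connection(), ctx.connection())     # same connection twice
    ctx.finish()
```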

9.
Network-level QoS mechanisms are not sufficient to provide complete end-to-end performance guarantees. For an e-commerce site built on a three-tier architecture, this work proposes and implements, based on feedback control theory, an absolute delay guarantee in the application server's database connection pool. The connection pool is modified so that the average queueing delay of high-priority requests does not exceed a configured threshold. An approximate linear time-invariant (LTI) model of the database connection pool is built through system identification, an absolute-delay-guarantee controller is designed, and all components of the closed-loop system around the connection pool are implemented for the Tomcat Web application server. Test results show that the controller design remains effective even when the number of concurrent requests varies drastically.
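The paper designs its controller against an identified LTI model; the sketch below only shows the shape of such a loop: a simple integral controller that grows or shrinks the high-priority share of the connection pool whenever the measured queueing delay drifts from the setpoint. The gain, bounds, and actuator are assumptions, not the paper's controller.

```python
# Illustrative integral controller for an absolute delay guarantee: each
# sampling period, adjust the number of connections reserved for high-priority
# requests from the error between measured and target queueing delay.

class DelayController:
    def __init__(self, target_delay, pool_size, gain=20.0):
        self.target = target_delay        # delay setpoint (seconds)
        self.pool_size = pool_size
        self.gain = gain                  # integral gain (would come from the LTI design)
        self.reserved = pool_size // 2    # connections reserved for high priority

    def step(self, measured_delay):
        error = measured_delay - self.target
        # positive error (too slow) -> reserve more connections, and vice versa
        self.reserved += round(self.gain * error)
        self.reserved = max(1, min(self.pool_size - 1, self.reserved))
        return self.reserved

if __name__ == "__main__":
    ctrl = DelayController(target_delay=0.2, pool_size=20)
    for measured in (0.35, 0.30, 0.22, 0.19, 0.18):   # fake per-period measurements
        print(ctrl.step(measured))
```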

10.
A case study of Web server benchmarking using parallel WAN emulation
Carey  Rob  Martin 《Performance Evaluation》2002,49(1-4):111-127
This paper describes the use of a parallel discrete-event network emulator called the Internet Protocol Traffic and Network Emulator (IP-TNE) for Web server benchmarking. The experiments in this paper demonstrate the feasibility of high-performance wide area network (WAN) emulation using parallel discrete-event simulation (PDES) techniques on a single shared-memory multiprocessor. Our experiments with an Apache Web server achieve up to 8000 HTTP/1.1 transactions/s for static document retrieval across emulated WAN topologies with up to 4096 concurrent Web/TCP clients. The results show that WAN characteristics, including round-trip delays, packet losses, and bandwidth asymmetry, all have significant impacts on Web server performance, as do client protocol behaviors. WAN emulation using the IP-TNE enables stress testing and benchmarking of Web servers in ways that may not be possible in simple local area network (LAN) test scenarios.

11.
A Web Server Architecture Based on Event-Driven Processing and QoS Control
Web servers play a key role in meeting large and continually growing user demand, but they also face many challenges, including highly concurrent user requests, differing QoS requirements, and access control. To address these challenges, this paper proposes a new Web server architecture that combines event-driven processing with QoS control and, based on it, develops a prototype server named Epdds. Performance analysis of the Epdds server and comparison with other Web servers further verify the feasibility and effectiveness of this architecture.

12.
Banga  Gaurav  Druschel  Peter 《World Wide Web》1999,2(1-2):69-83
The World Wide Web and its related applications place substantial performance demands on network servers. The ability to measure the effect of these demands is important for tuning and optimizing the various software components that make up a Web server. To measure these effects, it is necessary to generate realistic HTTP client requests in a test‐bed environment. Unfortunately, the state‐of‐the‐art approach for benchmarking Web servers is unable to generate client request rates that exceed the capacity of the server being tested, even for short periods of time. Moreover, it fails to model important characteristics of the wide area networks on which most servers are deployed (e.g., delay and packet loss). This paper examines pitfalls that one encounters when measuring Web server capacity using a synthetic workload. We propose and evaluate a new method for Web traffic generation that can generate bursty traffic, with peak loads that exceed the capacity of the server. Our method also models the delay and loss characteristics of WANs. We use the proposed method to measure the performance of widely used Web servers. The results show that actual server performance can be significantly lower than indicated by standard benchmarks under conditions of overload and in the presence of wide area network delays and packet losses.
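The abstract does not spell out the generator's mechanism, so the sketch below only illustrates the key property it claims: an open-loop generator that issues requests on a fixed schedule regardless of how fast the server answers, so the offered load can exceed server capacity. A slow coroutine stands in for the Web server, and the rates are invented; this is not the authors' tool.

```python
# Assumed sketch of open-loop load generation: requests are issued on a fixed
# schedule, independent of when earlier responses come back, so the offered
# rate can exceed the server's service rate.
import asyncio

SERVICE_TIME = 0.05          # the "server" needs 50 ms per request -> capacity ~ 20 req/s
OFFERED_RATE = 50            # requests offered per second (deliberately above capacity)

async def fake_server(semaphore, results):
    async with semaphore:                     # server handles one request at a time
        await asyncio.sleep(SERVICE_TIME)
        results.append("done")

async def generate(duration_s=1.0):
    semaphore, results, tasks = asyncio.Semaphore(1), [], []
    interval = 1.0 / OFFERED_RATE
    for _ in range(int(duration_s * OFFERED_RATE)):
        tasks.append(asyncio.create_task(fake_server(semaphore, results)))
        await asyncio.sleep(interval)         # launch the next request on schedule
    _, pending = await asyncio.wait(tasks, timeout=0.2)
    print(f"offered {len(tasks)}, completed {len(results)}, still queued {len(pending)}")
    for task in pending:                      # overload: many requests never finished
        task.cancel()

if __name__ == "__main__":
    asyncio.run(generate())
```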

13.
Adapting multimedia Internet content for universal access
Content delivery over the Internet needs to address both the multimedia nature of the content and the capabilities of the diverse client platforms the content is being delivered to. We present a system that adapts multimedia Web documents to optimally match the capabilities of the client device requesting it. This system has two key components: (1) a representation scheme called the InfoPyramid that provides a multimodal, multiresolution representation hierarchy for multimedia, and (2) a customizer that selects the best content representation to meet the client capabilities while delivering the most value. We model the selection process as a resource allocation problem in a generalized rate distortion framework. In this framework, we address the issue of both multiple media types in a Web document and multiple resource types at the client. We extend this framework to allow prioritization on the content items in a Web document. We illustrate our content adaptation technique with a web server that adapts multimedia news stories to clients as diverse as workstations, PDAs and cellular phones.
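The abstract frames representation selection as resource allocation in a rate-distortion sense; the sketch below is a simplified, assumed illustration rather than the paper's formulation: for each content item, visited in priority order, pick the most valuable representation that still fits the client's bandwidth budget and display limits. All item names, values, and budgets are invented.

```python
# Simplified, assumed sketch of customizer-style selection: each content item
# has several representations (modality/resolution variants) with a "value",
# a size, and a pixel requirement; pick, in item-priority order, the most
# valuable variant the client can still afford.

CLIENT = {"bandwidth_bytes": 60_000, "max_px": 320 * 240}  # e.g. a PDA-class device

ITEMS = [  # (priority, [(name, value, bytes, pixels), ...]); higher value = better
    (1, [("video_clip", 10, 500_000, 640 * 480),
         ("key_frame", 6, 40_000, 320 * 240),
         ("caption_text", 2, 500, 0)]),
    (2, [("photo_hi", 5, 80_000, 640 * 480),
         ("photo_lo", 3, 15_000, 320 * 240)]),
]

def customize(items, client):
    budget, chosen = client["bandwidth_bytes"], []
    for _, variants in sorted(items):                   # most important items first
        for name, value, size, pixels in sorted(variants, key=lambda v: -v[1]):
            if size <= budget and pixels <= client["max_px"]:
                chosen.append(name)
                budget -= size                          # bandwidth is consumable
                break                                   # one representation per item
    return chosen

if __name__ == "__main__":
    print(customize(ITEMS, CLIENT))   # -> ['key_frame', 'photo_lo']
```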

14.
Multimedia streaming allows consumers to view multimedia content anywhere. However, quality of service is a major concern amid heightened levels of network traffic caused by increasing user demand. Accordingly, media streaming technology is adopting a new paradigm: adaptive HTTP streaming (AHS). AHS is widely used for real-time streaming content delivery in the Internet environment. In streaming, selection of appropriate bitrate is crucial for adapting media rate to network variations and client processing capabilities while ensuring optimal service for the consumer. We evaluate a proposed client-driven three-level optimized rate adaptation algorithm for adaptive HTTP media streaming. In the first stage, the algorithm chooses a suitable starting bitrate close to the available channel capacity. Next, the algorithm monitors the client parameters in real time, precisely detecting network variations and choosing a likely available bit representation for the next download segment. The algorithm controls and minimizes the effects of buffer stalls and overflow resulting from the brief network variations occurring between consecutive segments. The proposed algorithm is implemented in a Dynamic Adaptive Streaming over HTTP (DASH) player and its performance compared to that of commercially available Gstreamer-based HTTP Live Streaming (HLS) and DASH players which use conventional segment fetch time-based adaptation and throughput-based adaptation algorithms respectively. This evaluation uses a real-time cloud server client and test bed streaming setup. The resulting analysis shows that the client-driven three-level rate adaptation (TLRA) approach allows adaptive streaming clients to maximize use of end-to-end network capacity, delivering an ideal user experience by precisely predicting network variations and rapidly adapting to available bandwidth, minimizing rebuffering events and bitrate level changes.
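The exact TLRA rules are in the paper, not the abstract, so the sketch below only illustrates the three stages it names: pick a start bitrate near the measured capacity, then per segment choose the highest representation the recent throughput supports, damped by the buffer level to avoid stalls and overflow. The bitrate ladder, safety margin, and buffer thresholds are assumptions.

```python
# Assumed sketch of a three-stage client-side bitrate adaptation loop.
LADDER = [400, 800, 1600, 3000, 6000]             # available bitrates, kbit/s

def startup_bitrate(capacity_kbps):
    """Stage 1: start close to (but safely below) the estimated channel capacity."""
    return max((b for b in LADDER if b <= 0.8 * capacity_kbps), default=LADDER[0])

def next_bitrate(throughputs_kbps, buffer_s, current):
    """Stages 2 and 3: throughput-driven choice, damped by the buffer level."""
    est = min(throughputs_kbps[-3:])               # conservative recent-throughput estimate
    candidate = max((b for b in LADDER if b <= 0.8 * est), default=LADDER[0])
    if buffer_s < 2:                               # imminent stall: drop to the lowest rate
        return LADDER[0]
    if buffer_s < 5:                               # low buffer: never step up
        return min(candidate, current)
    # steady state: move at most one ladder step per segment to limit oscillation
    idx, cur = LADDER.index(candidate), LADDER.index(current)
    return LADDER[cur + max(-1, min(1, idx - cur))]

if __name__ == "__main__":
    rate = startup_bitrate(capacity_kbps=4500)                                # -> 3000
    print(rate, next_bitrate([4200, 3900, 2100], buffer_s=12, current=rate))  # -> 3000 1600
```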

15.
On top of a new Web cluster architecture, this paper proposes a resource-optimized dual-minimum balanced scheduling algorithm for differentiated services: the front-end dispatcher first assigns Web requests to the back-end servers according to resource-balance degree, and each back-end server then orders its Web request queue by combining the two characteristic parameters, request priority and resource-balance degree. Extensive simulation experiments were run to evaluate the algorithm's performance. Compared with other well-known scheduling policies, such as split scheduling, the dual-minimum balanced algorithm improves the efficiency of Web request handling by 11% while achieving good service differentiation, confirming that resource-optimized scheduling strategies have fairly general applicability.
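The abstract gives the two ingredients (resource-balance degree for front-end dispatching, priority combined with balance for per-server queue ordering) but not their formulas; the sketch below is an assumed illustration of that two-step structure with an invented balance metric and server table.

```python
# Assumed two-step sketch: (1) the front end dispatches a request to the
# back-end server whose resource usage would stay most balanced, and (2) each
# back end orders its queue by request priority, then by how well the
# request's resource demand balances the server.

SERVERS = {
    "s1": {"cpu": 0.70, "io": 0.30},
    "s2": {"cpu": 0.40, "io": 0.60},
}

def imbalance(usage):
    """Resource-balance degree: spread between the busiest and idlest resource."""
    return max(usage.values()) - min(usage.values())

def dispatch(request_demand):
    """Pick the server that stays most balanced after taking this request."""
    def after(server):
        u = SERVERS[server]
        return imbalance({r: u[r] + request_demand.get(r, 0) for r in u})
    return min(SERVERS, key=after)

def order_queue(queue):
    """queue: list of (priority, demand); lower priority value = more urgent."""
    return sorted(queue, key=lambda item: (item[0], imbalance(item[1])))

if __name__ == "__main__":
    print(dispatch({"cpu": 0.1}))             # the I/O-heavy server absorbs the CPU work
    q = [(1, {"cpu": 0.3, "io": 0.1}), (0, {"cpu": 0.2, "io": 0.2})]
    print(order_queue(q))
```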

16.
陈梅梅 《计算机科学》2016,43(8):199-203, 222
Request scheduling usually has to minimize response time and maximize system throughput while making full use of the available server resources, but for profit-oriented e-commerce sites the key is to raise the completion rate of transaction requests and of requests issued by VIP users. Targeting these multiple scheduling objectives, this paper first proposes revenue-driven, multi-dimensional criteria for classifying requests; on that basis it defines the concepts of request priority and scheduling priority, presents MODP, a multi-objective dynamic priority scheduling algorithm based on request classification, and introduces a scheduling mechanism based on anticipatory overload judgment rather than load measurement to avoid control delay, helping e-commerce sites adaptively deliver differentiated service and QoS guarantees under highly variable load. Simulation experiments demonstrate the effectiveness of the MODP mechanism and algorithm; a comparison with traditional FCFS scheduling shows that, whether the server is heavily or lightly loaded, the MODP policy has a clear advantage in maximizing revenue and minimizing mean response time.
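The abstract names the ingredients (revenue-driven multi-dimensional classification, request vs. scheduling priority, anticipatory overload judgment) without giving formulas, so the sketch below is an invented illustration of that shape, not the MODP algorithm itself; every weight, rate, and threshold is an assumption.

```python
# Invented illustration of revenue-driven priority scheduling with an
# anticipatory overload check: scheduling priority combines request type,
# user class, and waiting time; new low-value work is rejected when the
# *predicted* backlog exceeds capacity.

TYPE_VALUE = {"checkout": 3.0, "cart": 2.0, "browse": 1.0}   # revenue contribution
USER_VALUE = {"vip": 2.0, "member": 1.2, "guest": 1.0}

def scheduling_priority(req_type, user_class, wait_s):
    # higher value and longer waiting both raise priority (aging avoids starvation)
    return TYPE_VALUE[req_type] * USER_VALUE[user_class] + 0.1 * wait_s

def predicted_overload(queue_len, arrival_rate, service_rate, horizon_s=1.0, capacity=200):
    """Overload is judged before it happens, from arrivals expected in the horizon."""
    return queue_len + arrival_rate * horizon_s > service_rate * horizon_s + capacity

def admit(req_type, user_class, queue_len, arrival_rate, service_rate):
    if predicted_overload(queue_len, arrival_rate, service_rate):
        return TYPE_VALUE[req_type] * USER_VALUE[user_class] >= 2.0   # keep high-value only
    return True

if __name__ == "__main__":
    print(scheduling_priority("checkout", "vip", wait_s=4))   # -> 6.4
    print(admit("browse", "guest", queue_len=180, arrival_rate=300, service_rate=150))
```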

17.
The protocols used by the majority of Web transactions are HTTP/1.0 and HTTP/1.1. HTTP/1.0 is typically used with multiple concurrent connections between client and server during the process of Web page retrieval. This approach is inefficient because of the overhead of setting up and tearing down many TCP connections and because of the load imposed on servers and routers. HTTP/1.1 attempts to solve these problems through the use of persistent connections and pipelined requests, but there is inconsistent support for persistent connections, particularly with pipelining, from Web servers, user agents, and intermediaries. In addition, the use of persistent connections in HTTP/1.1 creates the problem of non-deterministic connection duration. Web browsers continue to open multiple concurrent TCP connections to the same server. This paper examines the idea of packaging the set of objects embedded on a Web page into a single bundle object for retrieval by clients. Based on measurements from popular Web sites and an implementation of the bundle mechanism, we show that if embedded objects on a Web page are delivered to clients as a single bundle, the response time experienced by clients is better than that provided by currently deployed mechanisms. Our results indicate that the use of bundles provides shorter overall download times and reduced average object delays as compared to HTTP/1.0 and HTTP/1.1. This approach also reduces the load on the network and servers. Implementation of the mechanism requires no changes to the HTTP protocol.
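The measurement and deployment details are in the paper; the snippet below is only a toy illustration of the bundle idea, where the server packages a page's embedded objects into one archive so the client needs a single additional request. A ZIP file is used here purely as a stand-in for whatever bundle format the authors chose, and the object paths are invented.

```python
# Toy illustration of the bundle idea: serve all objects embedded in a page
# as one archive so the client fetches them with a single request.
import io
import zipfile

def make_bundle(embedded_objects):
    """embedded_objects: {relative_url: bytes}; returns one bundle blob."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as bundle:
        for url, body in embedded_objects.items():
            bundle.writestr(url, body)
    return buf.getvalue()

def unpack_bundle(blob):
    """Client side: recover every embedded object from the single response."""
    with zipfile.ZipFile(io.BytesIO(blob)) as bundle:
        return {name: bundle.read(name) for name in bundle.namelist()}

if __name__ == "__main__":
    objs = {"img/logo.png": b"\x89PNG...", "css/site.css": b"body{margin:0}"}
    blob = make_bundle(objs)
    print(len(blob), sorted(unpack_bundle(blob)))   # one transfer, both objects back
```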

18.
郭光  张严心 《计算机应用》2014,34(4):973-976
Apache Web servers are usually modeled with single-variable models, so providing delay guarantees for multiple priority levels requires building several models and scales poorly. This paper therefore proposes a multiple-input multiple-output (MIMO) model of the Web server and designs a decentralized self-tuning controller by combining decentralized control theory with self-tuning control (STC) theory. The controller dynamically adjusts the number of worker threads handling requests of each priority level, ensuring that higher-priority requests are served faster while keeping the ratio of the average delays of the different priority classes at a configured setpoint; the model and controller parameters are updated in real time from online identification results. Simulations show that, under overload, the server in the closed-loop system maintains good proportional delay guarantees even when the number of concurrent client connections changes sharply.
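Below is a minimal sketch of the actuation idea only: shifting worker threads between priority classes until the measured ratio of their average delays matches the setpoint. The real design identifies a MIMO model online and derives the controller from it, which this toy loop does not attempt; the step size and thread counts are assumptions.

```python
# Toy sketch of proportional-delay control by thread reallocation: if the
# measured delay ratio (low/high priority) falls below the target, move a
# worker thread from the low- to the high-priority class, and vice versa.

class ThreadAllocator:
    def __init__(self, total_threads=32, target_ratio=3.0):
        self.high = total_threads // 2
        self.low = total_threads - self.high
        self.target = target_ratio                 # desired delay_low / delay_high

    def step(self, delay_high, delay_low):
        ratio = delay_low / max(delay_high, 1e-6)
        if ratio < self.target and self.low > 1:   # high class not favored enough
            self.low -= 1
            self.high += 1
        elif ratio > self.target and self.high > 1:
            self.high -= 1
            self.low += 1
        return self.high, self.low

if __name__ == "__main__":
    alloc = ThreadAllocator()
    for dh, dl in [(0.10, 0.20), (0.09, 0.24), (0.08, 0.27)]:   # measured delays (s)
        print(alloc.step(dh, dl))
```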

19.
Contemporary Web sites typically consist of front-end Web servers, application servers, and back-end information systems such as database servers. There has been limited research on how to provide overload control and service differentiation for the back-end systems. In this paper we propose an architecture called tiered service (TS) for these purposes. In TS, there are several heterogeneous back-end systems to serve the Web applications. The Web applications communicate with a routing intermediary to intelligently route the queries to the appropriate back-end servers based on various policies such as client profiles and server load. In our system the back ends may store different qualities of data; lower quality data typically requires less overhead to serve. The main contributions of this paper include (i) a tiered content replication scheme that replicates tiered qualities of data on heterogeneous back ends with different capacity to satisfy clients with diverse requirements for latency and quality of data, and (ii) an application-transparent query routing architecture that automatically routes the queries to the appropriate back ends. The architecture was implemented in our test bed, and its performance was benchmarked. The experimental results demonstrate that TS offers significant performance improvement.
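The abstract names the routing inputs (client profile, server load, data-quality tiers) but not the policy; the sketch below is a hypothetical routing intermediary that prefers the highest-quality tier the client's profile allows and degrades to a cheaper tier when that tier is overloaded. Tier names, profiles, and the load threshold are assumptions.

```python
# Hypothetical sketch of a TS-style routing intermediary: back ends hold
# different quality tiers of the data; route each query to the best tier the
# client's profile allows, falling back to a cheaper tier under load.

BACKENDS = [  # ordered best quality first
    {"name": "db-full", "tier": 1, "load": 0.92},
    {"name": "db-summary", "tier": 2, "load": 0.40},
    {"name": "db-stale", "tier": 3, "load": 0.15},
]
PROFILE_TIER = {"premium": 1, "standard": 2, "best-effort": 3}  # best tier allowed
LOAD_LIMIT = 0.85

def route(client_profile):
    best_allowed = PROFILE_TIER[client_profile]
    candidates = [b for b in BACKENDS if b["tier"] >= best_allowed]
    for backend in candidates:                     # try the best allowed tier first
        if backend["load"] < LOAD_LIMIT:
            return backend["name"]
    return min(candidates, key=lambda b: b["load"])["name"]   # all busy: least loaded

if __name__ == "__main__":
    print(route("premium"), route("standard"), route("best-effort"))
```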

20.
This work designs and implements an I/O Server system based on the transparent computing model. I/O Server and I/O Client are the two software modules of I/O Manager, a network storage access service that supports remote booting and running of multiple operating systems in a transparent computing environment; I/O Server runs on the server side and I/O Client on the client side. In the transparent computing model, each client's hardware is decoupled from the operating system, and the operating systems and applications users need are stored on the server. When a client boots, I/O Server and the boot protocol download I/O Client to the end system, where it runs and issues I/O requests to I/O Server. I/O Server analyzes the received I/O requests, classifies them by priority, schedules them with priority-based time-sliced round-robin, operates on the virtual disk files on the server, reduces disk I/O through prefetching and caching, and returns the results to the client, thereby supporting remote OS boot and serving the various requests generated while the system runs.
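Below is a small sketch of the scheduling-and-caching portion only: classifying I/O requests by priority, serving the classes in weighted round-robin, and satisfying reads from a block cache with simple one-block sequential prefetch. The boot protocol and virtual-disk format are out of scope here, and the weights, block layout, and cache policy are assumptions.

```python
# Sketch of the described request handling: classify I/O requests by priority,
# serve the classes in weighted round-robin, and use a block cache with
# one-block sequential prefetch to cut disk reads.
from collections import deque

DISK = {n: f"block-{n}".encode() for n in range(64)}   # stand-in virtual disk image
WEIGHTS = {"high": 3, "low": 1}                         # slots per round-robin cycle

class IOServer:
    def __init__(self):
        self.queues = {"high": deque(), "low": deque()}
        self.cache = {}

    def submit(self, priority, block_no):
        self.queues[priority].append(block_no)

    def _read(self, block_no):
        if block_no not in self.cache:                  # miss: read and prefetch next block
            self.cache[block_no] = DISK[block_no]
            if block_no + 1 in DISK:
                self.cache[block_no + 1] = DISK[block_no + 1]
        return self.cache[block_no]

    def run_cycle(self):
        """One weighted round-robin cycle over the priority classes."""
        served = []
        for priority, weight in WEIGHTS.items():
            for _ in range(weight):
                if self.queues[priority]:
                    block = self.queues[priority].popleft()
                    served.append((priority, block, self._read(block)))
        return served

if __name__ == "__main__":
    srv = IOServer()
    for b in (3, 4, 20):
        srv.submit("high", b)
    srv.submit("low", 40)
    print([(p, b) for p, b, _ in srv.run_cycle()])   # high-priority blocks served first
```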
