Similar Documents
20 similar documents retrieved (search time: 31 ms).
1.
Meeting client Quality-of-Service (QoS) expectations proves to be a difficult task for the providers of e-Commerce services, especially when web servers experience overload conditions, which cause increased response times and request rejections, leading to user frustration, lowered usage of the service and reduced revenues. In this paper, we propose a server-side request scheduling mechanism that addresses these problems. Our Reward-Driven Request Prioritization (RDRP) algorithm gives higher execution priority to client web sessions that are likely to bring more service profit (or any other application-specific reward). The method works by predicting future session structure by comparing its requests seen so far with aggregated information about recent client behavior, and using these predictions to preferentially allocate web server resources. Our experiments using the TPC-W benchmark application with an implementation of the RDRP techniques in the JBoss web application server show that RDRP can significantly boost profit attained by the service, while providing better QoS to clients that bring more profit.
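A minimal sketch of the reward-driven idea, not the authors' JBoss implementation: requests are dispatched in order of the session's predicted remaining profit, with the predictor reduced to a placeholder lookup from the session's request prefix to an average observed reward. All class and method names below are invented for illustration.

import java.util.*;
import java.util.concurrent.PriorityBlockingQueue;

// Sketch of reward-driven request prioritization (hypothetical names, not the RDRP/JBoss code).
public class RewardScheduler {
    static class PendingRequest implements Comparable<PendingRequest> {
        final String sessionId;
        final String url;
        final double predictedReward;   // expected remaining profit of the session
        PendingRequest(String sessionId, String url, double predictedReward) {
            this.sessionId = sessionId; this.url = url; this.predictedReward = predictedReward;
        }
        public int compareTo(PendingRequest o) {            // higher reward served first
            return Double.compare(o.predictedReward, this.predictedReward);
        }
    }

    // Aggregated behaviour of recent clients: request-path prefix -> average profit eventually earned.
    private final Map<String, Double> rewardByPrefix = new HashMap<>();
    private final PriorityBlockingQueue<PendingRequest> queue = new PriorityBlockingQueue<>();

    public void recordHistory(String prefix, double observedProfit) {
        rewardByPrefix.merge(prefix, observedProfit, (a, b) -> (a + b) / 2);  // crude running average
    }

    public void submit(String sessionId, String prefixSoFar, String url) {
        double predicted = rewardByPrefix.getOrDefault(prefixSoFar, 0.0);
        queue.add(new PendingRequest(sessionId, url, predicted));
    }

    public PendingRequest next() throws InterruptedException {
        return queue.take();             // worker threads pull the most profitable request first
    }

    public static void main(String[] args) throws InterruptedException {
        RewardScheduler s = new RewardScheduler();
        s.recordHistory("browse>search>add-to-cart", 40.0);   // sessions on the buying path are valuable
        s.recordHistory("browse>browse", 0.5);
        s.submit("sess-1", "browse>browse", "/catalog");
        s.submit("sess-2", "browse>search>add-to-cart", "/checkout");
        System.out.println(s.next().url);                     // prints /checkout
    }
}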

2.
Exception control for abnormally disconnected connections in the ADO.Net connection pool
Dynamic Web sites must repeatedly fetch data from a database to generate pages, so creating a fresh database connection for every request imposes substantial overhead; ADO.Net therefore uses a connection pool to reduce the cost of establishing connections. However, connections in the ADO.Net pool are often dropped abnormally, and the pool manager may hand these effectively dead connections to requesting applications, which then hit connection exceptions when executing SQL statements over them. This paper presents a solution for avoiding such connection exceptions in the SQL Server .NET data provider.
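The paper targets the SQL Server .NET data provider, but the same failure mode exists in any connection pool. The Java/JDBC sketch below illustrates the usual countermeasure, assumed rather than taken from the paper: validate a pooled connection before use and retry once if it turns out to be dead.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

// JDBC analogue of the problem described for the ADO.Net pool: a pooled connection may have been
// dropped by the server, so validate it before use and retry once if it is no longer usable.
public class ResilientQuery {
    private final DataSource pool;   // any pooling DataSource (placeholder)

    public ResilientQuery(DataSource pool) { this.pool = pool; }

    public int countRows(String table) throws SQLException {
        SQLException last = null;
        for (int attempt = 0; attempt < 2; attempt++) {             // at most one retry
            try (Connection c = pool.getConnection()) {
                if (!c.isValid(2)) {                                // 2-second validation timeout
                    continue;                                       // dead connection: discard and retry
                }
                try (Statement st = c.createStatement();
                     ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM " + table)) {
                    rs.next();
                    return rs.getInt(1);
                }
            } catch (SQLException e) {
                last = e;                                           // broken mid-query: retry once
            }
        }
        throw last != null ? last : new SQLException("no valid connection available");
    }
}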

3.
To raise the processing capacity of embedded Web servers, offset their limited computing and storage resources, and improve fault tolerance, this work surveys the current state and trends of embedded Web servers and, drawing on load-balancing techniques for conventional Web servers, investigates load balancing for embedded Web servers. A least-connections algorithm driven by request priority and suited to embedded Web servers is designed and its workflow is given. Experiments show that the system meets its goal of executing urgent requests preferentially, quickly, and accurately.
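The paper's exact algorithm is not reproduced here; a plausible reading of "least connections based on request priority", sketched below with invented names, is to dispatch the most urgent waiting request first and to send it to the backend currently holding the fewest active connections.

import java.util.*;

// Sketch of a priority-aware least-connections dispatcher for a small cluster of embedded Web
// servers. Details are illustrative; the paper's exact algorithm is not reproduced here.
public class PriorityLeastConnections {
    static class Backend {
        final String name;
        int activeConnections;
        Backend(String name) { this.name = name; }
    }
    static class Request {
        final String path;
        final int priority;           // higher value = more urgent
        Request(String path, int priority) { this.path = path; this.priority = priority; }
    }

    private final List<Backend> backends;
    private final PriorityQueue<Request> pending =
            new PriorityQueue<>(Comparator.comparingInt((Request r) -> r.priority).reversed());

    PriorityLeastConnections(List<Backend> backends) { this.backends = backends; }

    void submit(Request r) { pending.add(r); }

    // Take the most urgent waiting request and send it to the least-loaded backend.
    Backend dispatch() {
        Request r = pending.poll();
        if (r == null) return null;
        Backend best = Collections.min(backends, Comparator.comparingInt(b -> b.activeConnections));
        best.activeConnections++;
        System.out.println(r.path + " (prio " + r.priority + ") -> " + best.name);
        return best;
    }

    public static void main(String[] args) {
        PriorityLeastConnections lb = new PriorityLeastConnections(
                Arrays.asList(new Backend("node-a"), new Backend("node-b")));
        lb.submit(new Request("/status", 1));
        lb.submit(new Request("/alarm", 9));   // urgent request is dispatched first
        lb.dispatch();
        lb.dispatch();
    }
}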

4.
As Web applications become widespread in commerce, Web server systems must provide differentiated services under heavy load to meet the varying requirements of their users. To provide differentiated service with delay as the evaluation metric, this paper builds a feedback-control-based proportional delay guarantee model at two levels of the Web server system: connection management and request processing. The feedback controller in the model dynamically computes and adjusts the resources (service threads and database connections) occupied by each client class, so that high-priority clients are served sooner while the ratio of average delays between client classes remains constant. To test the closed-loop system, two dynamic workloads were designed, following a uniform distribution and a heavy-tailed distribution respectively. Simulation results show that, even when the number of concurrent client connections varies drastically, the server under the controller still achieves good proportional delay guarantees and reliably provides differentiated service to its users.
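The controller design used in the paper is not given here; the toy sketch below only illustrates the feedback idea with a simple integral controller that shifts the worker-thread share toward the premium class whenever the measured delay ratio drifts from its target. Gains, bounds, and names are assumptions.

// Toy integral controller for proportional delay differentiation between two client classes.
// It adjusts the fraction of worker threads given to the premium class so that the measured
// ratio delay(basic)/delay(premium) tracks a fixed target. Gains and names are illustrative.
public class ProportionalDelayController {
    private final double targetRatio;      // desired delay(basic) / delay(premium)
    private final double gain;             // integral gain
    private double premiumShare = 0.5;     // fraction of threads reserved for premium clients

    public ProportionalDelayController(double targetRatio, double gain) {
        this.targetRatio = targetRatio;
        this.gain = gain;
    }

    // Called once per sampling period with the average delays measured in that period.
    public double update(double premiumDelayMs, double basicDelayMs) {
        double measuredRatio = basicDelayMs / Math.max(premiumDelayMs, 1e-6);
        double error = targetRatio - measuredRatio;        // positive: premium not favoured enough
        premiumShare += gain * error;
        premiumShare = Math.min(0.9, Math.max(0.1, premiumShare));  // keep both classes alive
        return premiumShare;
    }

    public static void main(String[] args) {
        ProportionalDelayController c = new ProportionalDelayController(3.0, 0.05);
        // Simulated sampling periods: premium delay, basic delay (ms)
        double[][] samples = { {100, 150}, {90, 200}, {80, 230}, {70, 215} };
        for (double[] s : samples) {
            System.out.printf("premium share -> %.2f%n", c.update(s[0], s[1]));
        }
    }
}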

5.
We propose a service replication framework for unreliable networks. The service exhibits the same consistency guarantees about the order of execution of operation requests as its non‐replicated implementation. Such guarantees are preserved in spite of server replica failure or network failure (either between server replicas or between a client and a server replica), and irrespective of when the failure occurs. Moreover, the service guarantees that in the case when a client sends an ‘update’ request multiple times, there is no risk that the request be executed multiple times. No hypotheses about the timing retransmission policy of clients are made, e.g. the very same request might even arrive at different server replicas simultaneously. All of these features make the proposed framework particularly suitable for interaction between remote programs, a scenario that is gaining increasing importance. We discuss a prototype implementation of our replication framework based on Tomcat, a very popular Java‐based Web server. The prototype comes in two flavors: replication of HTTP client session data and replication of a counter accessed as a Web service. Copyright © 2005 John Wiley & Sons, Ltd.
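A minimal sketch of the at-most-once property described for retransmitted 'update' requests, assuming the client attaches a stable request identifier: the first execution's result is cached and replayed for every duplicate. This illustrates the guarantee only, not the framework's actual mechanism.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// If a client retransmits an 'update' request (possibly to several replicas), the cached result of
// the first execution is returned instead of executing the side effect again. Names are illustrative.
public class AtMostOnceExecutor {
    private final Map<String, String> resultsByRequestId = new ConcurrentHashMap<>();

    // requestId must be chosen by the client and repeated verbatim on every retransmission.
    public String execute(String requestId, Supplier<String> updateOperation) {
        return resultsByRequestId.computeIfAbsent(requestId, id -> updateOperation.get());
    }

    public static void main(String[] args) {
        AtMostOnceExecutor ex = new AtMostOnceExecutor();
        int[] balance = { 100 };
        Supplier<String> withdraw10 = () -> {
            balance[0] -= 10;                       // the side effect must happen exactly once
            return "balance=" + balance[0];
        };
        System.out.println(ex.execute("req-42", withdraw10));  // executes: balance=90
        System.out.println(ex.execute("req-42", withdraw10));  // retransmission: cached balance=90
    }
}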

6.
Design and implementation of a server-side script engine for an embedded Web server
Embedded devices have very limited resources: both RAM and ROM are very small. To provide Web service under these constraints, a simplified software architecture for an embedded Web server is designed on the basis of an analysis of HTTP/1.1. Following Microsoft ASP technology, the main functions and interfaces for server-side script parsing in the embedded Web server are defined, and the way the script engine is invoked is described. HTTP connection handling and script parsing were implemented successfully in C.

7.
A QoS Web server based on client address is designed and implemented. It assigns different service levels to client requests and responds to them according to different service policies. Experimental results show that the QoS Web server effectively delivers the expected quality of service.

8.
Performance evaluation of Web servers
Web server benchmarking is a way of understanding how a Web server responds to different workloads, and it is of great help for capacity planning and performance improvement. This paper discusses the principles, methods, difficulties, and solutions of Web server benchmarking; describes the characteristics of Web workloads, the ON/OFF source model, and the browser/server architecture; and presents WSBench, a Web server benchmarking tool. WSBench generates asymptotically self-similar HTTP request sequences and evaluates server performance at four levels: static documents, dynamic documents without database access, dynamic documents with database access, and a mix of the three following Zipf's law. Results are reported as three metrics: requests per second, bytes per second, and round-trip time. Finally, Web server performance issues are discussed, along with how the metrics measured by WSBench suggest ways to improve server performance.
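WSBench itself is not shown here; the fragment below only illustrates one ingredient the abstract mentions, drawing request targets according to Zipf's law. The document list, exponent, and seed are arbitrary placeholders.

import java.util.Random;

// Choosing request targets according to Zipf's law: rank r gets probability proportional to 1/r^s.
public class ZipfRequestMix {
    private final double[] cdf;
    private final Random rng = new Random(42);

    public ZipfRequestMix(int ranks, double s) {
        double[] weights = new double[ranks];
        double sum = 0;
        for (int r = 1; r <= ranks; r++) {
            weights[r - 1] = 1.0 / Math.pow(r, s);
            sum += weights[r - 1];
        }
        cdf = new double[ranks];
        double acc = 0;
        for (int i = 0; i < ranks; i++) {
            acc += weights[i] / sum;
            cdf[i] = acc;                    // cumulative distribution over ranks
        }
    }

    public int nextRank() {
        double u = rng.nextDouble();
        for (int i = 0; i < cdf.length; i++) {
            if (u <= cdf[i]) return i + 1;
        }
        return cdf.length;
    }

    public static void main(String[] args) {
        String[] targets = { "/static/index.html", "/dynamic/list", "/dynamic/query-db", "/static/img.png" };
        ZipfRequestMix mix = new ZipfRequestMix(targets.length, 1.0);
        for (int i = 0; i < 10; i++) {
            System.out.println("GET " + targets[mix.nextRank() - 1]);
        }
    }
}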

9.
Implementation of a Web IM system based on server push and XMPP
To address the message latency and the heavy client-server traffic caused by the extensive use of periodic AJAX polling in existing Web IM systems, a Web IM system based on server push is proposed, with XMPP as the communication protocol between server and client. After comparing the principles of AJAX and Comet, a strategy for implementing long-lived HTTP connections with Comet is given. A prototype Web IM system verifies that Comet can indeed effectively eliminate the large number of periodic AJAX requests.
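The prototype from the paper is not reproduced; the sketch below shows a generic Comet-style long poll with the Servlet 3.0 asynchronous API, in which a held-open HTTP request replaces periodic AJAX polling. Class names and the 30-second timeout are assumptions.

import java.io.IOException;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import javax.servlet.AsyncContext;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Generic Comet-style long polling: the client issues GET /poll; the request is parked until a
// message arrives, then answered, so no periodic AJAX polling is needed.
@WebServlet(urlPatterns = "/poll", asyncSupported = true)
public class LongPollServlet extends HttpServlet {
    private final Queue<AsyncContext> waiting = new ConcurrentLinkedQueue<>();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        AsyncContext ctx = req.startAsync();
        ctx.setTimeout(30_000);          // client re-issues the poll after 30 s of silence
        waiting.add(ctx);                // park the connection instead of answering immediately
    }

    // Called by the server side (e.g. when an XMPP message arrives) to push to all waiting clients.
    public void push(String message) {
        AsyncContext ctx;
        while ((ctx = waiting.poll()) != null) {
            try {
                ctx.getResponse().setContentType("text/plain;charset=UTF-8");
                ctx.getResponse().getWriter().write(message);
            } catch (IOException ignored) {
                // client went away; nothing to do
            } finally {
                ctx.complete();          // finish this long poll; the client reconnects
            }
        }
    }
}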

10.
Cluster‐based solutions are being widely adopted for implementing flexible, scalable, low‐cost and high‐performance web server platforms. One of the main difficulties in implementing these platforms is the correct dimensioning of the cluster size, so as to satisfy variable and peak demand periods. In this context, virtualization is being adopted by many organizations as a solution not only to provide service elasticity, but also to consolidate server workloads, and improve server utilization rates. A virtualized web server can be dynamically adapted to the client demands by deploying new virtual nodes when the demand increases, and powering off and consolidating virtual nodes during periods of low demand. Furthermore, the resources from the in‐house infrastructure can be complemented with a cloud provider (cloud bursting), so that peak demand periods can be satisfied by deploying cluster nodes in the external cloud, on an on‐demand basis. In this paper, we analyze the scalability of hybrid virtual infrastructures for two different distributed web server cluster implementations: a simple web cluster serving static files and a multi‐tier web server platform running the CloudStone benchmark. Copyright © 2011 John Wiley & Sons, Ltd.

11.
Differentiated Web services with strict service-level agreements
With the spread of the Internet and the growth of Web applications, satisfying the different business requirements of clients has become a widely studied problem. Based on the resource demands and service levels of each traffic class, this paper presents a resource partitioning strategy that determines the resources a class needs to meet its service level when its arrival rate fluctuates only slightly, together with a resource utility control mechanism that absorbs bursty arrivals and keeps each class within its service-level requirement. By partitioning resources periodically and using the control mechanism to handle bursts within each period, differentiated Web service with strict service-level agreements (SLADS) is obtained. A prototype was built and compared experimentally with other approaches; the results show that SLADS effectively supports strict service levels for traffic classes and improves resource utilization.

12.
Computer Networks, 2008, 52(7): 1390-1409
Overload control mechanisms such as admission control and connection differentiation have proven effective for preventing overload of application servers running secure web applications. However, achieving optimal results in overload prevention is only possible when some kind of resource management is considered in addition to these mechanisms. In this paper we propose an overload control strategy for secure web applications that brings together dynamic provisioning of platform resources and admission control based on secure socket layer (SSL) connection differentiation. Dynamic provisioning enables additional resources to be allocated to an application on demand to handle workload increases, while the admission control mechanism avoids the server’s performance degradation by dynamically limiting the number of new SSL connections accepted and preferentially serving resumed SSL connections (to maximize performance in session-based environments) while additional resources are being provisioned. Our evaluation demonstrates the benefit of our proposal for efficiently managing the resources and preventing server overload on a 4-way multiprocessor Linux hosting platform, especially when the hosting platform is fully overloaded.
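A schematic of the admission decision the abstract describes, not the authors' implementation: resumed SSL connections are always preferred because they skip the full handshake, while new handshakes are admitted only while an estimated per-interval budget remains. The cost constants are arbitrary.

// Resumed SSL connections are cheap and are admitted preferentially; expensive new handshakes are
// admitted only while the estimated CPU budget of the current interval allows it.
public class SslAdmissionControl {
    private static final double NEW_HANDSHAKE_COST = 7.0;   // relative CPU cost of a full handshake
    private static final double RESUMED_COST = 1.0;         // cost of resuming an existing session

    private final double capacityPerInterval;               // CPU budget available this interval
    private double usedThisInterval = 0;

    public SslAdmissionControl(double capacityPerInterval) {
        this.capacityPerInterval = capacityPerInterval;
    }

    public synchronized boolean admit(boolean resumedSession) {
        double cost = resumedSession ? RESUMED_COST : NEW_HANDSHAKE_COST;
        if (resumedSession || usedThisInterval + cost <= capacityPerInterval) {
            usedThisInterval += cost;     // resumed sessions are always served to keep sessions alive
            return true;
        }
        return false;                     // refuse/defer new handshakes while overloaded
    }

    public synchronized void resetInterval() { usedThisInterval = 0; }

    public static void main(String[] args) {
        SslAdmissionControl ac = new SslAdmissionControl(20.0);
        System.out.println(ac.admit(false));  // new handshake: admitted while budget lasts
        System.out.println(ac.admit(false));
        System.out.println(ac.admit(false));  // budget exhausted for new handshakes -> false
        System.out.println(ac.admit(true));   // resumed session still admitted
    }
}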

13.
Design of an embedded Web server and its CGI implementation
Embedded devices have very limited resources: both RAM and ROM are very small, which makes providing Web service under such constraints difficult. Based on an analysis of HTTP/1.1, the characteristics and design principles of embedded Web servers are discussed, and a simplified software architecture for an embedded Web server is proposed. The implementation of CGI in the embedded Web server is described in detail. Finally, HTTP connection handling was implemented successfully in C.

14.
A number of technology and workload trends motivate us to consider the appropriate resource allocation mechanisms and policies for streaming media services in shared cluster environments. We present MediaGuard – a model-based infrastructure for building streaming media services – that can efficiently determine the fraction of server resources required to support a particular client request over its expected lifetime. The proposed solution is based on a unified cost function that uses a single value to reflect overall resource requirements such as the CPU, disk, memory, and bandwidth necessary to support a particular media stream based on its bit rate and whether it is likely to be served from memory or disk. We design a novel, time-segment-based memory model of a media server to efficiently determine in linear time whether a request will incur memory or disk access when given the history of previous accesses and the behavior of the server's main memory file buffer cache. Using the MediaGuard framework, we design two media services: (1) an efficient and accurate admission control service for streaming media servers that accounts for the impact of the server's main memory file buffer cache, and (2) a shared streaming media hosting service that can efficiently allocate the predefined shares of server resources to the hosted media services, while providing performance isolation and QoS guarantees among the hosted services. Our evaluation shows that, relative to a pessimistic admission control policy that assumes that all content must be served from disk, MediaGuard (as well as services that are built using it) delivers a factor of two improvement in server throughput.
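The sketch below mirrors the idea of a single cost value per stream, combining the bit rate with whether the stream is expected to be served from memory or from disk; the coefficients and capacity figure are invented, not MediaGuard's.

// Single-value stream cost: resources needed depend on the stream's bit rate and on whether it is
// expected to be served from memory or from disk. Coefficients are placeholders.
public class StreamCostModel {
    private static final double MEMORY_COST_PER_KBPS = 1.0;
    private static final double DISK_COST_PER_KBPS = 3.0;    // disk-bound streams are ~3x as expensive

    private final double serverCapacity;   // abstract capacity units available on the server
    private double allocated = 0;

    public StreamCostModel(double serverCapacity) { this.serverCapacity = serverCapacity; }

    public static double cost(double bitRateKbps, boolean servedFromMemory) {
        return bitRateKbps * (servedFromMemory ? MEMORY_COST_PER_KBPS : DISK_COST_PER_KBPS);
    }

    // Admission test: accept the new stream only if its cost fits in the remaining capacity.
    public boolean admit(double bitRateKbps, boolean servedFromMemory) {
        double c = cost(bitRateKbps, servedFromMemory);
        if (allocated + c > serverCapacity) return false;
        allocated += c;
        return true;
    }

    public static void main(String[] args) {
        StreamCostModel m = new StreamCostModel(10_000);
        System.out.println(m.admit(500, true));    // popular file, likely cached -> cheap
        System.out.println(m.admit(500, false));   // cold file served from disk -> 3x the cost
    }
}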

15.
DPSP: a content-based client request scheduling policy
Improving the performance of the execution server is crucial to the quality of service of a Web server. Based on an analysis of the relationships among the number of server threads, the scheduling policy, server response time, and the client request arrival rate, a dynamic priority scheduling policy (DPSP) is proposed. DPSP orders request execution on the server according to the importance of the request content, thereby improving the quality of service of requests. Simulation comparisons with the first-come-first-served (FCFS), short-request-first (SRF), and dynamic-earliest-deadline-first (DEDF) policies show that DPSP improves the quality of WWW service at the cost of only a small increase in response delay.

16.
Data storage servers with massive capacity and high I/O throughput are widely used in e-commerce and industrial manufacturing. QoS support in data storage servers matters to applications, yet in practice few storage servers provide it. This paper proposes a service resource allocation algorithm for storage servers that differentiates quality of service and guarantees it for high-priority users. By applying a variable service differentiation factor to high-priority and low-priority requests, the algorithm reduces the drop rate of low-priority requests and maximizes resource utilization while still differentiating service priorities and guaranteeing the service quality of high-priority requests.

17.
Mobile devices, with their increasingly powerful resources, allow the development of mobile information systems in which services are not only provided by traditional systems but also autonomously executed and controlled in the mobile devices themselves. Services distributed on autonomous mobile devices allow both the development of cooperative applications without a back‐end infrastructure and the development of applications blending distributed and centralized services. In this paper, we propose MicroMAIS: an integrated platform for supporting the execution of Web service‐based applications natively on a mobile device. The MicroMAIS platform is composed of mAS and μ‐BPEL. The former allows the execution of a single Web service, whereas the latter permits the orchestration of several Web services according to the WS‐BPEL standard. Copyright © 2011 John Wiley & Sons, Ltd.

18.
Traditional Web systems work in a request-response fashion: the client sends a request and the server responds. This paper analyzes IP multicast in depth, presents methods for automatically assigning the Web server's IP address, automatically obtaining its domain name, and resolving domain-name conflicts, and has the relevant services sent proactively to clients as IP packets, thereby achieving active push of Web services. Its effectiveness is finally validated on an embedded Linux system.
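The embedded Linux implementation from the paper is not reproduced; the fragment below shows only the basic IP multicast mechanism it builds on, with a server announcing a service to a multicast group that clients have joined. The group address, port, and message format are arbitrary examples.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.MulticastSocket;
import java.nio.charset.StandardCharsets;

// Basic IP multicast push: the server announces itself on a multicast group; every client that has
// joined the group receives the announcement without asking for it first.
public class MulticastPushDemo {
    static final String GROUP = "239.1.2.3";
    static final int PORT = 4446;

    // Server side: push a service announcement to the group.
    static void announce(String message) throws Exception {
        byte[] data = message.getBytes(StandardCharsets.UTF_8);
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.send(new DatagramPacket(data, data.length, InetAddress.getByName(GROUP), PORT));
        }
    }

    // Client side: join the group and wait for one pushed announcement.
    static String receiveOne() throws Exception {
        try (MulticastSocket socket = new MulticastSocket(PORT)) {
            socket.joinGroup(InetAddress.getByName(GROUP));   // deprecated in newer JDKs, kept simple here
            byte[] buf = new byte[512];
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            socket.receive(packet);                           // blocks until the server pushes
            return new String(packet.getData(), 0, packet.getLength(), StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws Exception {
        new Thread(() -> { try { System.out.println(receiveOne()); } catch (Exception ignored) {} }).start();
        Thread.sleep(500);                                    // give the receiver time to join the group
        announce("web-server at http://192.168.1.10:80/");
    }
}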

19.
Real networks contain several kinds of bottleneck resources, such as storage capacity, request-processing capacity, and bandwidth. Traditional load-balancing methods consider only one bottleneck resource and ignore the impact of the other bottleneck resources on network performance. To address this problem, a similarity model is built, a transfer cost function for migrating virtual servers is designed, and a multi-resource load-balancing (MRLB) algorithm applicable to any structured P2P network is proposed. Simulation results show that the load on each resource of a node grows as the node's capacity increases, and that MRLB effectively overcomes the drawbacks of traditional methods and balances the load across multiple bottleneck resources.

20.

In recent years, various studies on OpenStack-based high-performance computing have been conducted. OpenStack combines off-the-shelf physical computing devices and creates a resource pool of logical computing. The configuration of the logical computing resource pool provides computing infrastructure according to the user’s request and can be applied to infrastructure as a service (IaaS), which is a cloud computing service model. OpenStack-based cloud computing can provide various computing services for users using virtual machines (VMs). However, intensive computing service requests from a large number of users during large-scale computing jobs may delay job execution. Moreover, idle VM resources may accumulate and computing resources are wasted if users do not employ the cloud computing resources. To resolve the computing job delay and the waste of computing resources, a variety of studies are required, including computing task allocation, job scheduling, utilization of idle VM resources, and improvements in overall job execution speed as computing service requests increase. Thus, this paper proposes an efficient job management of computing service (EJM-CS) by which idle VM resources are utilized in OpenStack and users’ computing services are processed in a distributed manner. EJM-CS logically integrates idle VM resources, which have different performances, for computing services. EJM-CS reduces resource waste by utilizing idle VM resources. EJM-CS takes multiple computing services, rather than a single computing service, into consideration. EJM-CS determines the job execution order considering workloads and waiting time according to the job priority of the computing service requester and the computing service type, thereby providing improved performance of overall job execution when computing service requests increase.
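A small sketch of the job-ordering idea described for EJM-CS, with an invented weighting: pending jobs are ranked by requester priority and service type, plus an aging term so long-waiting jobs are not starved. The paper's actual policy and coefficients are not reproduced here.

import java.util.Comparator;
import java.util.PriorityQueue;

// Pending jobs are scored by requester priority, service type, and waiting time, then dispatched
// to idle VMs in score order. The weights are illustrative only.
public class JobQueueSketch {
    static class Job {
        final String name;
        final int requesterPriority;     // higher = more important requester
        final int serviceTypeWeight;     // e.g. interactive > batch
        final long submittedAtMs;
        Job(String name, int requesterPriority, int serviceTypeWeight, long submittedAtMs) {
            this.name = name;
            this.requesterPriority = requesterPriority;
            this.serviceTypeWeight = serviceTypeWeight;
            this.submittedAtMs = submittedAtMs;
        }
        double score(long nowMs) {
            double waitingMinutes = (nowMs - submittedAtMs) / 60_000.0;
            return 10.0 * requesterPriority + 5.0 * serviceTypeWeight + waitingMinutes;  // aging term
        }
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        PriorityQueue<Job> queue = new PriorityQueue<>(
                Comparator.comparingDouble((Job j) -> j.score(now)).reversed());
        queue.add(new Job("batch-render", 1, 1, now - 45 * 60_000));   // has waited 45 minutes
        queue.add(new Job("interactive-sim", 3, 3, now));              // just arrived, important
        while (!queue.isEmpty()) {
            System.out.println("dispatch to idle VM: " + queue.poll().name);
        }
    }
}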

