Similar Literature
20 similar articles retrieved.
1.
We have implemented an efficient and scalable web cluster named LVS-CAD/FC (LVS with Content-Aware Dispatching and File Caching). In LVS-CAD/FC, a kernel-level one-way content-aware web switch based on TCP Rebuilding examines and distributes HTTP requests from clients to web servers, and fast Multiple TCP Rebuilding is implemented to efficiently support persistent connections. In addition, a file-based web cache stores a small set of the most frequently accessed web files in server RAM to reduce disk I/O, and a lightweight redirect method efficiently redirects requests to this cache. In this paper, we further propose new policies for content-based, workload-aware request distribution, in which the web switch considers both the content of requests and workload characteristics when dispatching. In particular, web files with higher access frequencies are replicated in more servers' file-based caches, so that hot web files can be served by more servers. Our goals are to improve cluster performance by achieving better memory utilization and higher cache hit rates while maintaining load balance among servers. Experimental results from a practical implementation on Linux show that LVS-CAD/FC is efficient and scales well. Moreover, LVS-CAD/FC with the proposed policies achieves 66.89% better performance than the Linux Virtual Server with a content-blind web switch.
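The hot-file replication policy sketched in this abstract can be illustrated with a short example. The following is a minimal sketch, not the paper's implementation: the replication rule (copies roughly proportional to access frequency), the cache-slot model, and the least-connections tie-break are all assumptions introduced for illustration.

```python
def build_cache_assignment(file_freq, servers, slots_per_server):
    """Replicate hotter files onto more servers' RAM caches (sketch)."""
    total = sum(file_freq.values())
    total_slots = slots_per_server * len(servers)
    assignment = {s: set() for s in servers}
    # Hypothetical rule: replication degree grows with access frequency.
    for name, freq in sorted(file_freq.items(), key=lambda kv: -kv[1]):
        copies = max(1, min(len(servers), round(freq / total * total_slots)))
        # Place copies on the servers whose caches are currently emptiest.
        targets = sorted(servers, key=lambda s: len(assignment[s]))[:copies]
        for s in targets:
            if len(assignment[s]) < slots_per_server:
                assignment[s].add(name)
    return assignment

def dispatch(request_file, assignment, active_conns):
    """Content-aware dispatch: least-loaded server that caches the file."""
    candidates = [s for s, cached in assignment.items() if request_file in cached]
    pool = candidates or list(assignment)        # fall back to any server
    return min(pool, key=lambda s: active_conns[s])

servers = ["web1", "web2", "web3"]
freq = {"index.html": 500, "logo.png": 300, "report.pdf": 20}
plan = build_cache_assignment(freq, servers, slots_per_server=2)
print(plan, dispatch("index.html", plan, {"web1": 4, "web2": 1, "web3": 7}))
```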

2.
A Web Virtual Server Based on Distributed Request Processing (total citations: 1; self-citations: 0; citations by others: 1)
This paper designs and implements a distributed, high-performance Web virtual server based on content-aware load distribution. Its features are: the front end is responsible only for the TCP handshake with the client, while the back ends process subsequent packets and route replies directly to the client; a new transfer is identified by having the front end insert an MSS option into the first TCP packet of the request, which supports persistent connections well; and the TCP implementation is independent of the operating system. Experiments show that the system achieves high throughput and good scalability.

3.
Replication of information across a server cluster provides a promising way to support popular Web sites. However, a Web-server cluster requires some mechanism for scheduling requests onto the most available server. One common approach is to use the cluster Domain Name System (DNS) as a centralized dispatcher. The main problem is that WWW address-caching mechanisms (although they reduce network traffic) let this DNS dispatcher control only a very small fraction of the requests reaching the Web-server cluster. The non-uniformity of the load from different client domains and the high variability of real Web workloads add further complexity to the load-balancing issue. These characteristics make existing scheduling algorithms for traditional distributed systems inapplicable to controlling the load of Web-server clusters, and they motivate research on entirely new DNS policies that require some system state information. We analyze various DNS dispatching policies under realistic conditions in which state information must be estimated with low computation and communication overhead so as to be applicable to a Web cluster architecture. In a model of realistic scenarios for the Web cluster, a large set of simulation experiments shows that incorporating the proposed state estimators into the dispatching policies can substantially improve the effectiveness of the DNS scheduling algorithms, in particular compared with DNS algorithms that do not use adequate state information. This revised version was published online in August 2006 with corrections to the Cover Date.
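One way to picture a state-aware DNS dispatching policy of the kind analyzed here is the sketch below. It is hypothetical: the exponential smoothing of per-domain request rates and the "pending hidden load" bookkeeping are illustrative estimators, not the specific ones proposed in the paper.

```python
from collections import defaultdict

class DnsDispatcher:
    """DNS-level dispatching with a simple hidden-load estimator (sketch)."""
    def __init__(self, servers, ttl=240):
        self.servers = servers
        self.ttl = ttl
        self.domain_rate = defaultdict(lambda: 1.0)   # est. requests/sec per client domain
        self.pending_load = {s: 0.0 for s in servers} # load already routed via cached mappings

    def observe(self, domain, requests_seen, seconds):
        # Low-overhead state estimation: smooth the per-domain request rate.
        rate = requests_seen / max(seconds, 1e-6)
        self.domain_rate[domain] = 0.7 * self.domain_rate[domain] + 0.3 * rate

    def resolve(self, domain):
        # Expected extra load this mapping will impose while it stays cached.
        extra = self.domain_rate[domain] * self.ttl
        target = min(self.servers, key=lambda s: self.pending_load[s])
        self.pending_load[target] += extra
        return target

dns = DnsDispatcher(["10.0.0.1", "10.0.0.2"])
dns.observe("big-isp.example", requests_seen=1200, seconds=60)
dns.observe("small-lan.example", requests_seen=30, seconds=60)
print(dns.resolve("big-isp.example"), dns.resolve("small-lan.example"))
```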

4.
《Computer Networks》1999,31(11-16):1563-1577
This paper presents a study of Web content adaptation to improve server overload performance, together with an implementation of a Web content adaptation software prototype. When the request rate on a Web server increases beyond server capacity, the server becomes overloaded and unresponsive. The TCP listen queue of the server's socket overflows, exhibiting drop-tail behavior, and as a result clients experience service outages. Since clients typically issue multiple requests over the duration of a session with the server, and since requests are dropped indiscriminately, all clients connecting to the server at overload are likely to experience connection failures, even though there may be enough capacity on the server to deliver all responses properly for a subset of clients. In this paper, we propose to resolve the overload problem by adapting the delivered content to load conditions. The premise is that, under overload, successful delivery of less resource-intensive content is more desirable to clients than connection rejections or failures. The paper argues for the feasibility of content adaptation from three viewpoints: (a) the potential for automating content adaptation with minimal involvement of the content provider, (b) the ability to achieve sufficient savings in resource requirements by adapting present-day Web content while preserving adequate information, and (c) the feasibility of applying content adaptation technology on the Web with no modification to existing Web servers, browsers, or the HTTP protocol.
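A minimal sketch of the content-adaptation idea follows: the server keeps pre-generated variants of a page at decreasing fidelity and selects one based on current load. The variant paths and load thresholds are assumptions for illustration only.

```python
# Hypothetical tiers of pre-generated content, ordered from richest to lightest.
VARIANTS = ["full/index.html", "reduced/index.html", "text-only/index.html"]

def pick_variant(load_factor):
    """Degrade delivered content as server load grows (illustrative thresholds)."""
    if load_factor < 0.7:
        return VARIANTS[0]          # normal load: full-fidelity page
    if load_factor < 0.9:
        return VARIANTS[1]          # approaching overload: fewer/smaller images
    return VARIANTS[2]              # overload: text-only fallback

def current_load(active_requests, capacity):
    return active_requests / capacity

print(pick_variant(current_load(active_requests=430, capacity=500)))  # reduced/index.html
```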

5.
周颖  赵岳松 《计算机工程》2003,29(16):172-174
This paper analyzes related Web dispatcher techniques and proposes a novel architecture to improve Web request routing in server and cache clusters. The MPLS-based scheme maps application-layer information onto layer-2 labels to enable sophisticated request-routing functions without the bottleneck of TCP connection termination. Client-side proxy servers participate in obtaining appropriate labels for client requests, and off-the-shelf MPLS switches can perform the dispatching, which allows the dispatcher to off-load some key functions and achieve scalability.

6.
With the rapid growth of the Internet, scheduling and dispatching techniques for Web requests have become a research focus both in China and abroad. To address the poor scalability, high system overhead, and lack of generality of existing Web dispatchers, this paper designs and implements Cuttle, a new, general-purpose, and highly scalable content-based dispatcher. The system is implemented in the kernel TCP/IP layer and employs techniques such as pseudo-servers, piggybacking, packet interception, a spoofed three-way handshake, and the Mix-LARD scheduling policy. Experiments show that Cuttle scales well and has low response latency.

7.
熊智  晏蒲柳  郭成城 《计算机工程》2006,32(17):35-37,4
To make a Web cluster support QoS, several QoS mechanisms are implemented in the dispatcher, including differentiated service, performance isolation, dynamic server partitioning, admission control, and content adaptation. For high-priority requests, the system ensures that the quality of service meets the previously negotiated service-level agreement; for low-priority requests, the system provides best-effort service. In particular, when the servers are heavily loaded, the dispatcher does not simply drop requests but uses content adaptation to prevent server overload. Practical tests show that the system meets all of its design requirements.

8.
Replication of information across multiple servers is becoming a common approach to support popular Web sites. A distributed architecture with some mechanism for assigning client requests to Web servers is more scalable than any centralized or mirrored architecture. In this paper, we consider distributed systems in which the Authoritative Domain Name Server (ADNS) of the Web site takes the request-dispatcher role by mapping the URL hostname to the IP address of a visible node, that is, a Web server or a Web cluster interface. This architecture can support both local and geographical distribution of the Web servers. However, the ADNS controls only a very small fraction of the requests reaching the Web site, because the address mapping is not requested for each client access. Indeed, to reduce Internet traffic, address resolutions are cached at various name servers for a time-to-live (TTL) period. This opens an entirely new set of problems that traditional centralized schedulers of parallel/distributed systems do not have to face. The heterogeneity of Web node capacity, which is much more likely in practice, increases the complexity of the request assignment problem and severely limits the applicability and performance of existing load-sharing algorithms. We propose new assignment strategies, namely adaptive TTL schemes, which tailor the TTL value for each address mapping instead of using a fixed value for all mapping requests. The adaptive TTL schemes address both the non-uniformity of client requests and the heterogeneous capacity of Web server nodes. Extensive simulations show that the proposed algorithms are very effective in avoiding node overload, even for high levels of heterogeneity and limited ADNS control.
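An adaptive-TTL mapping can be sketched as follows. The scaling rule (TTL stretched for high-capacity servers, shortened for client domains that generate heavy load) and the specific constants are assumptions, not the formulas proposed in the paper.

```python
def adaptive_ttl(base_ttl, server_capacity, max_capacity, domain_rate, max_rate):
    """Adaptive-TTL sketch: hand out longer TTLs for mappings that point to
    faster servers, and shorter TTLs to client domains that generate heavy load.
    The scaling rule here is illustrative, not the paper's exact formula."""
    capacity_scale = server_capacity / max_capacity          # in (0, 1]
    rate_scale = max_rate / max(domain_rate, 1.0)            # >= 1 for light domains
    ttl = base_ttl * capacity_scale * min(rate_scale, 4.0)   # cap the stretch
    return max(int(ttl), 30)                                 # keep a sane floor

# A fast server answering a light client domain keeps its mapping cached longer.
print(adaptive_ttl(240, server_capacity=100, max_capacity=100, domain_rate=5, max_rate=50))
# A slow server answering a heavy domain gets a short TTL so control returns to the DNS.
print(adaptive_ttl(240, server_capacity=25, max_capacity=100, domain_rate=50, max_rate=50))
```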

9.
Existing content-based Web switching techniques do not support HTTP persistent connections (P-HTTP) and pipelined requests well. This paper proposes a delayed multiple handoff method for TCP connections (DM-TCPHA): the front end (FE) hands the connection off to a selected back end (BE) according to the first request, subsequent request packets on that connection are scheduled by the BE, and the BE is forced to perform a further handoff only after receiving the client's acknowledgement of the HTTP response. The method retains the high performance of TCP handoff while supporting P-HTTP and pipelined requests well.

10.
With the growing volume of service requests from customers worldwide, cloud systems must provide services while meeting customer-satisfaction requirements. Recently, to achieve better reliability and performance, cloud systems have come to depend heavily on geographically distributed data centers. Nevertheless, the dollar cost of service placement by service providers (SPs) differs across regions. Accordingly, it is crucial to design a request-dispatching and resource-allocation algorithm that maximizes net profit. Existing algorithms are either built solely upon energy-efficient schemes, or are oblivious to multi-type requests and customer satisfaction; they cannot be applied to multi-type, customer-satisfaction-aware algorithm design with the objective of maximizing net profit. This paper proposes an ant-colony-optimization-based algorithm for maximizing the SP's net profit (AMP) on geographically distributed data centers, taking customer satisfaction into consideration. First, using a model of customer satisfaction, we formulate the utility (net profit) maximization issue as an optimization problem under the constraints of customer satisfaction and data centers. Second, we analyze the complexity of the optimal request-dispatching problem and rigorously prove that it is NP-complete. Third, to evaluate the proposed algorithm, we conduct comprehensive simulations and compare it with other state-of-the-art algorithms. We also extend the work to consider each data center's power usage effectiveness. The results show that AMP maximizes SP net profit by dispatching service requests to the proper data centers and generating the appropriate number of virtual machines to meet customer satisfaction. Moreover, we demonstrate the effectiveness of our approach under the impact of dynamically arriving heavy workloads, various evaporation rates, and the consideration of power usage effectiveness. Copyright © 2014 John Wiley & Sons, Ltd.
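A toy sketch of an ant-colony-style dispatcher conveys the flavour of AMP, though it omits the customer-satisfaction constraints, VM generation, and the real pheromone/heuristic design; all numbers, names, and update rules below are assumptions.

```python
import random

class AcoDispatcher:
    """Toy ant-colony-style dispatcher (illustrative only): data centers with higher
    pheromone and higher estimated profit-per-request attract more requests."""
    def __init__(self, centers, evaporation=0.1):
        self.centers = centers                     # name -> est. net profit per request
        self.pheromone = {c: 1.0 for c in centers}
        self.evaporation = evaporation

    def choose(self):
        weights = {c: self.pheromone[c] * max(self.centers[c], 0.01)
                   for c in self.centers}
        total = sum(weights.values())
        r, acc = random.uniform(0, total), 0.0
        for c, w in weights.items():
            acc += w
            if r <= acc:
                return c
        return c                                   # numerical edge case: last center

    def feedback(self, center, realized_profit):
        # Evaporate everywhere, then reinforce the chosen center by realized profit.
        for c in self.pheromone:
            self.pheromone[c] *= (1.0 - self.evaporation)
        self.pheromone[center] += max(realized_profit, 0.0)

dispatcher = AcoDispatcher({"us-east": 0.8, "eu-west": 0.5, "ap-south": 0.3})
for _ in range(100):
    dc = dispatcher.choose()
    dispatcher.feedback(dc, realized_profit=dispatcher.centers[dc])
print(dispatcher.pheromone)
```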

11.
Case-based reasoning (CBR) is an effective and fast problem-solving methodology that solves new problems by recalling and adapting past cases. With the increasing number of requests for useful references, covering all kinds of problems and coming from different locations, maintaining a single CBR system seems outdated and impractical. Multiple CBR agents located in different places can help meet these requests quickly. In this paper, the architecture of a multi-CBR agent system is proposed, in which the CBR agents are located in different places and are assumed to be equally able to handle new problems independently. When requests in a request queue arrive one by one from different places, we propose a new policy for deciding which agent to dispatch to satisfy the request queue. Throughout the paper, we assume that the system must serve each arriving request while knowing only the past requests. In this context, the performance of traditional greedy algorithms is not satisfactory. We apply a new but simple approach, a competitive algorithm for the on-line problem (called the On-line Multi-CBR Agent Dispatching algorithm), to determine a dispatching policy that keeps the cost comparatively low. The corresponding on-line dispatching algorithm is proposed and its competitive ratio is given. Based on the competitive algorithm, the dispatching of multi-CBR agents is optimized.

12.
Modern Web-based application infrastructures are based on clustered multi-tiered architectures, where request distribution occurs in two sequential stages: over a cluster of Web servers and over a cluster of application servers. Much work has focused on strategies for distributing requests across a Web server cluster in order to improve overall throughput. The same strategies have typically been applied at the application tier, under the assumption that they transfer directly. In this paper, we argue that the problem of distributing requests across an application server cluster is fundamentally different from the Web server request distribution problem, owing to core differences in how requests are processed by Web and application servers. We devise an approach for distributing requests across a cluster of application servers such that overall system throughput is enhanced and load across the application servers is balanced.
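One concrete way to see why application-tier distribution differs from Web-tier distribution: application requests vary widely in processing cost, so a dispatcher can track outstanding work rather than connection counts. The request classes and cost weights below are hypothetical, and this sketch is not the paper's specific approach.

```python
# Hypothetical request classes and their relative processing costs on an
# application server (values are assumptions for illustration).
REQUEST_COST = {"browse": 1.0, "search": 3.0, "checkout": 8.0}

class AppTierDispatcher:
    """Dispatch to the application server with the least outstanding work,
    not merely the fewest connections; a sketch of why the two tiers differ."""
    def __init__(self, servers):
        self.outstanding = {s: 0.0 for s in servers}

    def dispatch(self, request_class):
        cost = REQUEST_COST.get(request_class, 1.0)
        target = min(self.outstanding, key=self.outstanding.get)
        self.outstanding[target] += cost
        return target

    def complete(self, server, request_class):
        self.outstanding[server] -= REQUEST_COST.get(request_class, 1.0)

d = AppTierDispatcher(["app1", "app2"])
print(d.dispatch("checkout"), d.dispatch("browse"), d.dispatch("browse"))
```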

13.
The current Web service model treats all requests equivalently, both while they are processed by servers and while they are transmitted over the network. For some uses, such as multiple-priority schemes, different levels of service are desirable. We propose application-level TCP connection management mechanisms for Web servers that provide two levels of Web service, high and low, by setting different time-outs for inactive TCP connections. We evaluated the performance of the mechanism under heavy and light load on the Web server. Our experiments show that, even though heavy traffic saturates the network, performance for the high-level class improves by as much as 25–28%. Therefore, this mechanism can effectively provide QoS-guaranteed services even in the absence of operating system and network support.
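The mechanism can be sketched at application level as a connection table swept with class-dependent idle time-outs. The class names and time-out values below are assumptions for illustration.

```python
import time

# Illustrative idle time-outs per service class (seconds); the real values and
# class names are assumptions, not those used in the paper.
IDLE_TIMEOUT = {"high": 60.0, "low": 5.0}

class ConnectionManager:
    """Application-level connection management: inactive connections of the
    low class are reclaimed much sooner than those of the high class."""
    def __init__(self):
        self.last_active = {}   # conn_id -> (service_class, last activity timestamp)

    def touch(self, conn_id, service_class):
        self.last_active[conn_id] = (service_class, time.time())

    def reap_idle(self, now=None):
        now = time.time() if now is None else now
        closed = [cid for cid, (cls, ts) in self.last_active.items()
                  if now - ts > IDLE_TIMEOUT[cls]]
        for cid in closed:
            del self.last_active[cid]   # in a real server: also close the socket
        return closed

mgr = ConnectionManager()
mgr.touch("paying-customer", "high")
mgr.touch("crawler", "low")
print(mgr.reap_idle(now=time.time() + 10))   # only the low-class connection is reaped
```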

14.
Case-based reasoning (CBR) is an effective and fast problem-solving methodology that solves new problems by recalling and adapting past cases. With the increasing number of requests for useful references, covering all kinds of problems and coming from different locations, maintaining a single CBR system seems outdated and impractical. Multiple CBR agents located in different places can help meet these requests quickly. In this paper, the architecture of a multi-CBR agent system is proposed, in which each CBR agent is located in a different place and is assumed to have the same ability to handle new problems independently. When requests in a request queue arrive one by one from different places, we propose a new agent-dispatching policy to satisfy the request queue. Throughout the paper, we assume that the system must serve each arriving request while knowing only the past requests. In this context, the performance of traditional greedy algorithms is not satisfactory. We apply a new but simple approach, a competitive algorithm for the on-line problem (called ODAL), to determine a dispatching policy that keeps the cost comparatively low. The corresponding on-line dispatching algorithm is proposed and its competitive ratio is given. Based on the competitive algorithm, the dispatching of multi-CBR agents is optimized.

15.
Integrated Schemes for Web Request Dispatching and Selection and Their Performance Analysis (total citations: 26; self-citations: 0; citations by others: 26)
单志广  戴琼海  林闯  杨扬 《软件学报》2001,12(3):355-366
The Internet service model is shifting from traditional communication and information browsing to electronic transactions and services, which requires WWW servers to support prioritized e-commerce requests while maintaining fairness among all kinds of Web applications. Aiming at system load balancing and at satisfying the Web QoS requirements and fairness of different requests, this paper discusses and proposes integrated schemes for HTTP request dispatching and selection in parallel WWW server cluster systems, and provides stochastic high-level Petri net models of these schemes. To address the state-space explosion of the models, an approximate performance analysis technique that significantly simplifies model solving is also proposed. Numerical analysis results and a performance evaluation of the integrated schemes are given, and schemes suitable for e-commerce applications and for achieving a high-performance cluster are recommended…

16.
This paper proposes ADSB, an adaptive document-size-based scheduling algorithm for dispatching static requests. It measures load by resource occupation time and dispatches requests according to the size of the requested document, balancing the load across the back-end servers. ADSB periodically predicts the statistical characteristics of the upcoming load from the load history and adjusts its parameters according to the prediction. Because of its target locality, ADSB achieves a high cache hit rate. Since the sizes of real static documents follow a heavy-tailed distribution, dispatching documents of different sizes to different servers allows ADSB to reduce the average response time of small documents without noticeably affecting large ones. Experiments show that ADSB outperforms existing classical scheduling algorithms.
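A size-interval dispatching sketch in the spirit of ADSB is shown below; the size boundaries, the quartile-based adaptation step, and the server names are assumptions, since ADSB derives its parameters from predicted workload statistics and measures load by resource occupation time.

```python
# Size-interval dispatching sketch. The boundaries below are hypothetical;
# a size-based policy such as ADSB adapts its own parameters from workload history.
SIZE_BOUNDARIES = [10_000, 100_000, 1_000_000]   # bytes
SERVERS = ["small-srv", "medium-srv", "large-srv", "huge-srv"]

def pick_server(doc_size):
    """Send small documents to dedicated servers so they never queue behind
    heavy-tailed large transfers."""
    for i, bound in enumerate(SIZE_BOUNDARIES):
        if doc_size <= bound:
            return SERVERS[i]
    return SERVERS[-1]

def adapt_boundaries(observed_sizes):
    """Crude adaptation step: reset the cut points to workload quartiles."""
    sizes = sorted(observed_sizes)
    n = len(sizes)
    return [sizes[n // 4], sizes[n // 2], sizes[(3 * n) // 4]]

print(pick_server(4_096), pick_server(50_000_000))
print(adapt_boundaries([1_000, 8_000, 20_000, 300_000, 2_000_000, 9_000_000, 500, 64_000]))
```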

17.
Research on a Load-Balancing Scheduling Strategy for Web Clusters Based on the OSI Application Layer (total citations: 1; self-citations: 0; citations by others: 1)
This paper surveys current research on layer-7 (L7) load-balancing scheduling for Web clusters and analyzes the main factors affecting performance. When estimating Web load, it takes into account both request intensity and the capacity of each Web server, and proposes a least-load scheduling algorithm for clusters of servers with heterogeneous processing capacities. The algorithm also accounts for the sharp performance degradation that occurs when a server enters a critical state, and avoids driving the cluster into that state. The new algorithm tracks the cluster load fairly accurately and balances the load more evenly.
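The least-load rule with a critical-state guard can be sketched as follows; the capacity figures, the utilization formula, and the 0.9 critical threshold are illustrative assumptions rather than the paper's exact model.

```python
# Least-load scheduling sketch for a heterogeneous cluster. Capacities and the
# "critical" threshold below are illustrative assumptions.
SERVERS = {"fast-node": 100.0, "mid-node": 60.0, "slow-node": 30.0}  # capacity units
CRITICAL_UTILIZATION = 0.9

def pick_server(pending_work):
    """Pick the server with the lowest relative load, skipping servers that are
    close to the critical state where performance collapses."""
    utilization = {s: pending_work[s] / cap for s, cap in SERVERS.items()}
    healthy = {s: u for s, u in utilization.items() if u < CRITICAL_UTILIZATION}
    pool = healthy or utilization                 # if all are critical, least bad wins
    return min(pool, key=pool.get)

work = {"fast-node": 85.0, "mid-node": 20.0, "slow-node": 28.0}
print(pick_server(work))   # mid-node: lowest relative load among non-critical servers
```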

18.
In this paper, we investigate Cloud computing resource provisioning to extend the computing capacity of local clusters in the presence of failures. We consider three steps in resource provisioning: resource brokering, dispatch sequences, and scheduling. The proposed brokering strategy is based on a stochastic analysis of routing in distributed parallel queues and takes into account the response times of the Cloud provider and of the local cluster, as well as the computing cost on both sides. Moreover, we propose dispatching with probabilistic and deterministic sequences to redirect requests to the resource providers. We also incorporate checkpointing in some well-known scheduling algorithms to provide a fault-tolerant environment. We propose two cost-aware and failure-aware provisioning policies that can be used by an organization that operates a cluster managed by virtual machine technology and seeks to use resources from a public Cloud provider. Simulation results demonstrate that the proposed policies improve the response time of users' requests by a factor of 4.10 under a moderate load, with a limited cost on a public Cloud.
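A probabilistic dispatch sequence between the local cluster and a public Cloud can be sketched as below; the scoring rule that trades response time against a cloud cost penalty is an assumption introduced for illustration, not the paper's stochastic analysis.

```python
import random

# Illustrative brokering sketch: requests are split between the local cluster and
# a public Cloud provider according to a probabilistic dispatch sequence derived
# from their estimated response times and a cost penalty (assumed weighting rule).
def dispatch_probability(local_resp, cloud_resp, cloud_cost_penalty=1.5):
    """Return the probability of sending a request to the public Cloud."""
    local_score = 1.0 / local_resp
    cloud_score = 1.0 / (cloud_resp * cloud_cost_penalty)
    return cloud_score / (local_score + cloud_score)

def dispatch(local_resp, cloud_resp):
    return "cloud" if random.random() < dispatch_probability(local_resp, cloud_resp) else "local"

# When local response time degrades (e.g. node failures), more traffic shifts out.
print(dispatch_probability(local_resp=0.2, cloud_resp=0.5))   # lightly loaded cluster
print(dispatch_probability(local_resp=2.0, cloud_resp=0.5))   # overloaded/failing cluster
print(dispatch(local_resp=2.0, cloud_resp=0.5))
```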

19.
On-demand broadcast is an effective data dissemination approach in mobile computing environments. Most recent studies on on-demand data broadcast assume that clients request only a single data object at a time. This assumption may not be practical for increasingly sophisticated mobile applications. In this paper, we investigate the scheduling problem of time-critical requests for multiple data objects in on-demand broadcast environments and observe that existing scheduling algorithms designed for single-data-object requests perform unsatisfactorily in this new setting. Based on our analysis, we propose new algorithms to improve system performance. Copyright © 2010 John Wiley & Sons, Ltd.
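A sketch of a multi-item, deadline-aware broadcast scheduler follows; the scoring heuristic (urgency divided by the number of items a request still misses) is a plausible illustration, not one of the algorithms proposed in the paper.

```python
# Sketch of scheduling for multi-item, time-critical requests in an on-demand
# broadcast system. The scoring rule is an assumed heuristic for illustration.
def next_item_to_broadcast(requests, now):
    """requests: list of dicts with 'missing' (set of item ids) and 'deadline'."""
    scores = {}
    for req in requests:
        if not req["missing"] or req["deadline"] <= now:
            continue                               # already satisfied or already late
        urgency = 1.0 / (req["deadline"] - now)
        for item in req["missing"]:
            # Favour items that finish nearly-complete, urgent requests.
            scores[item] = scores.get(item, 0.0) + urgency / len(req["missing"])
    return max(scores, key=scores.get) if scores else None

pending = [
    {"missing": {"stock-A"}, "deadline": 12.0},              # urgent, one item left
    {"missing": {"stock-B", "stock-C"}, "deadline": 60.0},   # relaxed, two items left
]
print(next_item_to_broadcast(pending, now=10.0))   # stock-A
```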

20.
Performance Analysis of Request Dispatching and Selection in Web Server Clusters (total citations: 29; self-citations: 2; citations by others: 29)
林闯 《计算机学报》2000,23(5):500-508
This paper discusses and proposes request dispatching and selection control schemes for Web server clusters, provides stochastic high-level Petri net models of these schemes, and emphasizes the study of the schemes together with their performance models and analysis methods. To address the state-space explosion of the models, the authors propose an approximate performance analysis technique that significantly simplifies model solving. The Web server cluster models, the request dispatching and selection control schemes, and the approximate performance analysis technique presented in the paper can be applied to the performance evaluation of this class of complex systems.
