Similar Documents
20 similar documents found (search time: 15 ms)
1.
Research and Implementation of a Content-Based Request Distribution System for Web Cluster Servers   (Cited: 1, self: 0, others: 1)
With the rapid growth of Internet-based Web application services, providing Web services with high performance, high reliability, and high scalability has become an urgent user requirement. Based on an analysis of existing request distribution strategies, this paper presents the design and implementation of a content-based load distribution strategy for cluster servers that also takes the cache locality of the back-end servers into account.
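The abstract does not spell out the dispatch rule itself; below is a minimal illustrative sketch (not the paper's implementation) of one content-aware policy in the spirit it describes: LARD-style routing that sends a URL back to the server that last served it unless that server is overloaded. Server names, counters, and the overload threshold are assumptions.

```python
# Illustrative sketch of a content-aware dispatcher that favors cache
# locality (LARD-style). Server names, load counters, and the overload
# threshold are assumptions, not details from the paper.

class ContentAwareDispatcher:
    def __init__(self, servers, overload_threshold=100):
        self.servers = servers                  # list of server ids
        self.load = {s: 0 for s in servers}     # active requests per server
        self.url_map = {}                       # URL -> server that cached it
        self.overload_threshold = overload_threshold

    def dispatch(self, url):
        target = self.url_map.get(url)
        # Reassign if the URL is new or its preferred server is overloaded.
        if target is None or self.load[target] > self.overload_threshold:
            target = min(self.servers, key=lambda s: self.load[s])
            self.url_map[url] = target          # future hits reuse its cache
        self.load[target] += 1
        return target

    def finished(self, url, server):
        self.load[server] -= 1                  # request completed


d = ContentAwareDispatcher(["web1", "web2", "web3"])
print([d.dispatch(u) for u in ["/a", "/b", "/a", "/c", "/a"]])
```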

2.
3.
A Web Server Cluster Based on Decentralized Lookup   (Cited: 2, self: 0, others: 2)
With the rapid increase in WWW users, the demands placed on Web servers keep growing, and clustering is an important way to address this problem. However, existing Web server clusters still suffer from poor adaptivity and system bottlenecks. Building on the decentralized lookup algorithm Pastry, this paper proposes a new Web server cluster scheme. It inherits many advantages of decentralized lookup, such as scalability, self-organization, high fault tolerance, and decentralization, and thus overcomes the shortcomings of traditional Web server cluster schemes.
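Pastry routes a key to the live node whose identifier is numerically closest; as a rough, hypothetical illustration of how a request could be mapped to a back-end server without any central dispatcher, here is a consistent-hashing-style sketch (a simplification of Pastry's prefix routing, with invented server names).

```python
import hashlib
from bisect import bisect_right

# Simplified illustration of decentralized, hash-based request routing.
# Real Pastry uses prefix-based routing tables and leaf sets; this sketch
# only shows the idea of mapping a key to the numerically closest node id.

def node_id(name, bits=32):
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** bits)

class HashRing:
    def __init__(self, servers):
        self.ring = sorted((node_id(s), s) for s in servers)

    def route(self, url):
        key = node_id(url)
        ids = [i for i, _ in self.ring]
        idx = bisect_right(ids, key) % len(self.ring)   # wrap around the ring
        return self.ring[idx][1]

ring = HashRing(["web1", "web2", "web3", "web4"])
print(ring.route("/index.html"))   # any node can compute this locally
```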

4.
A Comparison of Load Balancing Algorithms for Web Clusters   (Cited: 3, self: 0, others: 3)
邱钊  陈明锐 《现代计算机》2006,(8):61-63,90
With the spread of Internet applications, the performance demands on Web servers keep rising. Organizing multiple hosts into a cluster that provides a single Web service to the outside is currently a popular, cost-effective, highly reliable, and scalable solution, and the performance of such a cluster hinges on its balancing algorithm. Based on the LVS project, this paper analyzes and experimentally compares the performance of various balancing algorithms, providing useful guidance for building Web cluster systems.
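The abstract names LVS but not the individual schedulers; as a toy illustration of two policies commonly compared in this setting, the sketch below contrasts weighted round-robin with weighted least-connections. Weights and server names are invented; the paper's actual experiments run on LVS itself.

```python
import itertools

# Toy comparison of two classic balancing policies. The weighted round-robin
# here is a crude expansion by weight; weighted least-connections picks the
# server with the lowest connections-to-weight ratio.

servers = {"rs1": 4, "rs2": 2, "rs3": 1}          # name -> weight
active = {s: 0 for s in servers}                  # currently open connections

wrr_cycle = itertools.cycle(
    [s for s, w in servers.items() for _ in range(w)])

def weighted_round_robin():
    return next(wrr_cycle)

def weighted_least_connections():
    return min(servers, key=lambda s: active[s] / servers[s])

for i in range(6):
    a = weighted_round_robin()
    b = weighted_least_connections()
    active[b] += 1                     # the least-connections pick stays open
    print(f"request {i}: wrr -> {a}, wlc -> {b}")
```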

5.
Social media streaming has become one of the most popular applications on the Internet. Commercial systems built on CDN (Content Delivery Network) engines have been deployed successfully, but they suffer from the excessive cost of dedicated servers, and as the network traffic of social media streaming keeps growing, a cost-effective solution remains an elusive goal. The emergence of cloud computing meets this challenge by dynamically leasing cloud servers. This paper aims to migrate the capacity of social media systems to clouds at reduced cost. First, it lowers the capacity requested from clouds to reduce the migration cost: based on data crawled from YouTube, the most representative online social media site, we find that with probability greater than 90% all of the videos a YouTube user requests lie within three hops of related videos. These three hops of related videos are therefore treated as a cluster, and a user's request can be partly satisfied by other users watching videos in the same cluster, lessening the capacity requested from clouds; capacity migration is thus organized per cluster under the P2P (Peer-to-Peer) paradigm, and a cloud-assisted P2P social media system is proposed. Second, given the diverse capacities, costs, and limited lease sizes of cloud servers, we formulate an optimization problem for leasing cloud servers at minimum cost and present a heuristic solution. An evaluation based on data crawled from a cluster of YouTube videos shows the efficiency of the proposed schemes.
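The heuristic itself is not given in the abstract; the sketch below shows one plausible greedy formulation of the leasing step it describes, covering a cluster's residual demand (after the P2P contribution) with the cheapest mix of cloud server types. All instance types, capacities, prices, and lease limits are invented for illustration.

```python
# Hypothetical greedy sketch of leasing cloud servers to cover the capacity
# a video cluster still needs after peers in the same cluster serve part of
# the demand. Server types, capacities, prices, and lease limits are made up.

server_types = [
    # (name, capacity in Mbps, cost per hour, max units leasable)
    ("small", 100, 0.10, 50),
    ("medium", 450, 0.40, 20),
    ("large", 1000, 0.85, 10),
]

def lease(residual_demand):
    """Greedily cover the demand, best capacity-per-cost types first."""
    plan, cost = [], 0.0
    for name, cap, price, limit in sorted(server_types,
                                          key=lambda t: t[2] / t[1]):
        units = min(limit, int(residual_demand // cap))
        if units > 0:
            plan.append((name, units))
            residual_demand -= units * cap
            cost += units * price
    if residual_demand > 0:
        # Cover the leftover with one unit of the cheapest type that fits,
        # or the largest type if no single unit is big enough.
        fits = [t for t in server_types if t[1] >= residual_demand]
        name, _, price, _ = (min(fits, key=lambda t: t[2]) if fits
                             else max(server_types, key=lambda t: t[1]))
        plan.append((name, 1))
        cost += price
    return plan, cost

# Demand left over after peers in the same video cluster serve what they can.
print(lease(residual_demand=5000 - 3200))
```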

6.
陈利平 《微机发展》2008,(9):216-218
In university campus networks, as the number of clients and of intensive tasks grows, a single Web server, limited by its processing capacity, has become the new bottleneck for network access. Adding Web servers to relieve the resource pressure may raise costs and leave equipment idle, so giving the Web servers high availability becomes the best way to solve this problem. Building on the main algorithms used in course-selection systems, and combining probabilistic dynamic distribution with queueing theory from operations research, a multi-channel waiting service-desk model for university course-selection systems is established. Practical results show that applying the proposed model to a university course-selection system reduces operating costs and improves the level of service.
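For reference, the standard steady-state M/M/n quantities such a multi-channel waiting model relies on (Erlang C waiting probability, mean queue length, mean waiting time) can be computed as follows; the arrival rate, service rate, and number of channels are illustrative values, not figures from the paper.

```python
from math import factorial

# Standard steady-state M/M/n formulas (Erlang C). The arrival rate lam,
# service rate mu, and number of servers n below are illustrative only.

def mmn_metrics(lam, mu, n):
    rho = lam / (n * mu)                      # server utilization, must be < 1
    a = lam / mu                              # offered load in Erlangs
    p0 = 1.0 / (sum(a**k / factorial(k) for k in range(n))
                + a**n / (factorial(n) * (1 - rho)))
    p_wait = a**n / (factorial(n) * (1 - rho)) * p0     # Erlang C probability
    lq = p_wait * rho / (1 - rho)             # mean number waiting in queue
    wq = lq / lam                             # mean waiting time (Little's law)
    return p_wait, lq, wq

print(mmn_metrics(lam=8.0, mu=3.0, n=4))      # e.g. 8 req/s, 3 req/s per server
```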

7.
Building on Web QoS control for server clusters, and taking into account the request content, the capacity of each server, and the current load-balance state of the whole cluster, a hybrid load-balancing scheduling strategy based on two-level L4/L7 distribution is designed. The algorithm introduces a feedback loop that dynamically adjusts the weights of the Web servers and uses a threshold on the degree of load imbalance to decide which scheduling policy to apply, thereby improving the performance of the Web cluster system.
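The abstract gives the strategy only in outline; a hypothetical sketch of the decision loop it suggests, feedback-adjusted server weights plus an imbalance threshold that switches between cheap L4 dispatch and content-aware L7 dispatch, is shown below. The weight formula, imbalance metric, and threshold value are assumptions.

```python
# Hypothetical sketch of the hybrid L4/L7 decision loop: server weights are
# updated from feedback on measured load, and an imbalance threshold decides
# whether connection-level (L4) or content-aware (L7) dispatch is used.

def update_weights(capacity, measured_load):
    # Feedback step: scale each server's static capacity by how busy it is.
    return {s: capacity[s] / (1.0 + measured_load[s]) for s in capacity}

def imbalance(measured_load):
    loads = list(measured_load.values())
    mean = sum(loads) / len(loads)
    return (max(loads) - min(loads)) / mean if mean else 0.0

def choose_policy(measured_load, threshold=0.5):
    # Below the threshold the cluster is balanced enough for plain L4
    # dispatch; above it, switch to L7 dispatch that can steer requests
    # by URL and cache locality.
    return "L4" if imbalance(measured_load) < threshold else "L7"

capacity = {"s1": 100, "s2": 80, "s3": 60}
load = {"s1": 40, "s2": 70, "s3": 10}
print(update_weights(capacity, load), choose_policy(load))
```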

8.
A Separated Scheduling Strategy for Web Cluster Servers   (Cited: 9, self: 3, others: 9)
Queueing theory is used to examine the relationship between the overall performance of a Web cluster and its request scheduling strategy. The conclusion is that, when the Web cluster is not overloaded, a separated scheduling strategy in which some back-end servers handle only static requests while the others handle only dynamic requests outperforms a mixed strategy in which every back-end server handles both static and dynamic requests. Real tests with the SPECweb99 benchmark further confirm this: with a load parameter of 120 connections, the Web cluster completes 63 connections under the separated strategy but only 36 under the mixed strategy, completing 27 more of the 120 offered connections, an improvement of 22.5%.

9.
The widespread adoption of high-speed Internet access and its use for everyday tasks are causing profound changes in users' expectations of Web site performance and reliability. At the same time, server management is undergoing a period of change with the emergence of the cloud computing paradigm, which makes it possible to scale server infrastructures within minutes. To help set performance objectives that maximize user satisfaction and sales while minimizing the number of servers and their cost, we present a methodology for determining how sales are affected as response time increases. We begin by characterizing more than 6 months of Web performance measurements, then study how the fraction of buyers in the workload rises at peak traffic times, and finally build a model of sales through a learning process over a 5-year sales dataset. We conclude with an evaluation of the impact of high response times on users of popular Web applications.

10.
Dynamic Load Balancing for Distributed Data Stream Processing Systems   (Cited: 4, self: 0, others: 4)
A new architecture for large-scale distributed data stream processing systems is designed. The system consists of a set of heterogeneous server clusters; load is balanced among the homogeneous servers within each cluster, and thus across the whole system. One of the main goals of the cluster design is to trade resources for performance: the maximum number of servers in each cluster is large enough to guarantee that the system never becomes overloaded, so performance-degrading load-shedding techniques are no longer needed. Moreover, the number of servers actually in operation is determined by the real system load; under light load, some servers can enter a sleep state to save energy. New initialization and dynamic load-balancing algorithms are designed around the system's ability to add and remove servers dynamically. Compared with earlier distributed data stream processing systems, the much smaller number of servers per cluster lowers algorithmic complexity, speeds up execution, and widens the space for optimization.
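A hypothetical sketch of the "trade resources for performance" idea: within one homogeneous group, keep just enough servers awake for the measured load and let the rest sleep. Per-server capacity and the target utilization are assumptions, not values from the paper.

```python
import math

# Hypothetical sketch: within one homogeneous group, keep just enough servers
# awake for the current load and put the rest to sleep. Capacity and target
# utilization are illustrative assumptions.

def servers_needed(load, per_server_capacity, target_util=0.7):
    return max(1, math.ceil(load / (per_server_capacity * target_util)))

def adjust_pool(active, load, per_server_capacity, max_servers):
    wanted = min(max_servers, servers_needed(load, per_server_capacity))
    if wanted > active:
        return wanted, f"wake {wanted - active} server(s)"
    if wanted < active:
        return wanted, f"sleep {active - wanted} server(s)"
    return active, "no change"

print(adjust_pool(active=6, load=1200, per_server_capacity=400, max_servers=10))
```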

11.
Replication of information across multiple servers is becoming a common approach to support popular Web sites. A distributed architecture with some mechanisms to assign client requests to Web servers is more scalable than any centralized or mirrored architecture. In this paper, we consider distributed systems in which the Authoritative Domain Name Server (ADNS) of the Web site takes the request dispatcher role by mapping the URL hostname into the IP address of a visible node, that is, a Web server or a Web cluster interface. This architecture can support local and geographical distribution of the Web servers. However, the ADNS controls only a very small fraction of the requests reaching the Web site because the address mapping is not requested for each client access. Indeed, to reduce Internet traffic, address resolution is cached at various name servers for a time-to-live (TTL) period. This opens an entirely new set of problems that traditional centralized schedulers of parallel/distributed systems do not have to face. The heterogeneity assumption on Web node capacity, which is much more likely in practice, increases the order of complexity of the request assignment problem and severely affects the applicability and performance of the existing load sharing algorithms. We propose new assignment strategies, namely adaptive TTL schemes, which tailor the TTL value for each address mapping instead of using a fixed value for all mapping requests. The adaptive TTL schemes are able to address both the nonuniformity of client requests and the heterogeneous capacity of Web server nodes. Extensive simulations show that the proposed algorithms are very effective in avoiding node overload, even for high levels of heterogeneity and limited ADNS control.
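The adaptive TTL schemes are only characterized qualitatively here; the sketch below illustrates one plausible rule in their spirit, shrinking the TTL for client domains that generate heavy traffic and stretching it for answers that map to high-capacity servers, so that hot domains return to the ADNS sooner. All constants are illustrative.

```python
# Hypothetical adaptive-TTL rule in the spirit of the abstract: the DNS
# answer's TTL shrinks for client domains that generate heavy traffic and
# grows with the capacity of the server the name was mapped to.

BASE_TTL = 240          # seconds, nominal TTL
MIN_TTL, MAX_TTL = 10, 900

def adaptive_ttl(domain_request_rate, server_capacity, avg_rate=1.0,
                 avg_capacity=1.0):
    ttl = BASE_TTL * (server_capacity / avg_capacity) \
                   / max(domain_request_rate / avg_rate, 1e-6)
    return int(min(MAX_TTL, max(MIN_TTL, ttl)))

# A hot client domain mapped to a weak server gets a short TTL ...
print(adaptive_ttl(domain_request_rate=8.0, server_capacity=0.5))
# ... while a quiet domain mapped to a strong server keeps the answer longer.
print(adaptive_ttl(domain_request_rate=0.2, server_capacity=2.0))
```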

12.
Modern Web-based application infrastructures are based on clustered multitiered architectures, where request distribution occurs in two sequential stages: over a cluster of Web servers and over a cluster of application servers. Much work has focused on strategies for distributing requests across a Web server cluster in order to improve the overall throughput of the cluster. The strategies applied at the application tier have typically been the same as those used at the Web server tier, on the assumption that they transfer directly. In this paper, we argue that the problem of distributing requests across an application server cluster is fundamentally different from the Web server request distribution problem due to core differences in request processing in Web and application servers. We devise an approach for distributing requests across a cluster of application servers such that the overall system throughput is enhanced and load across the application servers is balanced.

13.
Globally distributed content delivery   (Cited: 1, self: 0, others: 1)
When we launched the Akamai system in early 1999, it initially delivered only Web objects (images and documents). It has since evolved to distribute dynamically generated pages and even applications to the network's edge, providing customers with on-demand bandwidth and computing capacity. This reduces content providers' infrastructure requirements, and lets them deploy or expand services more quickly and easily. Our current system has more than 12,000 servers in over 1,000 networks. Operating servers in many locations poses many technical challenges, including how to direct user requests to appropriate servers, how to handle failures, how to monitor and control the servers, and how to update software across the system. We describe our system and how we've managed these challenges.

14.
A Web service community gathers functionally similar Web services together to serve users; when many users access the community simultaneously, queueing occurs. Users waiting in the community occupy some resources and thus incur extra cost. This paper therefore studies how to set the optimal number of services so that this extra cost is minimized. It defines the Web service community, maps the queueing problem in the community onto a queueing theory problem, and identifies the queueing model as M/M/n. The steady-state queue length is computed to obtain the number of users waiting in the community. A cost function is then determined from the cost factors of the Web service community, and the optimal number of services is derived using marginal analysis from economics. Experiments show that the method finds the optimal number of services effectively and efficiently.
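A hedged reconstruction of the kind of cost function and marginal condition the abstract refers to (the notation and coefficients are ours, not the paper's):

```latex
% Assumed notation (not the paper's): n services, c_s = cost per service,
% c_w = cost per waiting user, L_q(n) = steady-state M/M/n mean queue length.
\[
  C(n) = c_s\,n + c_w\,L_q(n), \qquad
  L_q(n) = \frac{(\lambda/\mu)^n\,\rho}{n!\,(1-\rho)^2}\,P_0,
  \qquad \rho = \frac{\lambda}{n\mu} < 1 .
\]
% Marginal analysis: the optimal number of services n* satisfies
\[
  L_q(n^*) - L_q(n^*+1) \;\le\; \frac{c_s}{c_w} \;\le\; L_q(n^*-1) - L_q(n^*).
\]
```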

15.
Analysis and Research of Load Balancing Strategies for Web Cluster Systems   (Cited: 8, self: 4, others: 8)
Web clustering is an important way to address the capacity and scalability of Web server systems. This paper analyzes the main factors affecting the performance of Web cluster systems and proposes a content-based load balancing algorithm. The algorithm evaluates each server's load state with a weighted load metric and improves the cache hit rate by preserving load locality, thereby achieving good load-balancing results. Simulation experiments show that the algorithm has good adaptability and scalability.

16.
Distributing load across a cluster of workstations or PCs according to the content of client requests has many attractive properties, but content-based load distribution is quite difficult to implement under current Internet standards. This paper discusses several approaches to building scalable services (such as WWW services) with content-based load distribution, illustrates their advantages and disadvantages with examples, and introduces one approach among them that can achieve content-based distribution: connection handoff via the handshake. A backward-compatible protocol that extends the TCP handshake is described in detail.

17.
Research on Data Replica Distribution Schemes for WWW Cluster Servers   (Cited: 7, self: 0, others: 7)
To effectively improve the throughput, response speed, and scalability of WWW servers, many well-known sites worldwide have switched from a single host to WWW cluster servers. WWW cluster servers that adopt different replica distribution schemes also differ in data reliability. This paper examines the various data replica distribution schemes and establishes the optimal replica distribution scheme.

18.
A Dynamic Load Balancing Algorithm for Heterogeneous Web Server Clusters   (Cited: 35, self: 0, others: 35)
To cope with the dynamically changing load in Web server cluster systems, a dynamic request distribution algorithm with critical accelerated decay is proposed. An equivalent transformation of the load weights reflects the current load state of each server in the cluster more accurately; a critical decay factor effectively suppresses the "access denied" behavior a server may exhibit; random probabilistic assignment replaces fixed forwarding so that the access load is spread more evenly; and the parameters required by the algorithm are obtained from real measurements, which simplifies configuration. Experimental results show that the algorithm is clearly effective for high-density access to large, heavily loaded file sets.
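The weight and decay formulas are not given in the abstract; the sketch below is a hypothetical rendering of the ideas it lists: a capacity-and-load weight per server, a sharp decay once utilization passes a critical point, and random assignment in proportion to the weights.

```python
import random

# Hypothetical weight/decay scheme: weights shrink with utilization and
# collapse quickly past a critical point, and requests are assigned at
# random in proportion to the weights. Formulas and constants are assumed.

CRITICAL = 0.8      # utilization beyond which accelerated decay kicks in
DECAY = 10.0        # how sharply the weight collapses past that point

def weight(capacity, utilization):
    w = capacity * max(1.0 - utilization, 0.0)
    if utilization > CRITICAL:
        w /= 1.0 + DECAY * (utilization - CRITICAL)   # accelerated decay
    return w

def pick(servers):
    # servers: {name: (capacity, current utilization in [0, 1])}
    names = list(servers)
    weights = [weight(c, u) for c, u in servers.values()]
    if sum(weights) == 0:
        weights = [1.0] * len(names)      # everyone saturated: spread evenly
    return random.choices(names, weights=weights, k=1)[0]

servers = {"a": (100, 0.55), "b": (60, 0.85), "c": (80, 0.95)}
print(pick(servers))
```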

19.
Design and Implementation of a Virtual Internet Server   (Cited: 11, self: 0, others: 11)
To remedy the shortcomings of existing approaches to the performance bottleneck and reliability problems of Internet servers, a solution based on IP-layer load-balancing scheduling is proposed, in which a group of servers is organized into a scalable, highly available virtual Internet server. Scalability is achieved by transparently adding and removing nodes in the server pool; high availability is achieved by detecting node or service-process failures and correctly reconfiguring the system. The architecture, design method, and implementation techniques of the virtual Internet server are discussed in detail, and the corresponding performance test results are presented.
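As a user-level illustration only (the real system works at the IP layer inside the kernel), the sketch below shows the availability mechanism the abstract describes: a monitor probes each real server's service port and transparently removes failed nodes from, and re-adds recovered nodes to, the forwarding pool. Addresses, port, and timing are assumptions.

```python
import socket
import time

# Illustrative health-check loop: probe each real server's service port and
# keep the forwarding pool in sync with what is actually reachable.

REAL_SERVERS = [("10.0.0.11", 80), ("10.0.0.12", 80), ("10.0.0.13", 80)]

def alive(addr, timeout=1.0):
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False

def monitor(pool, interval=5.0):
    while True:
        for addr in REAL_SERVERS:
            if alive(addr):
                pool.add(addr)       # (re)insert a healthy node into the pool
            else:
                pool.discard(addr)   # stop forwarding to a failed node
        print("active pool:", sorted(pool))
        time.sleep(interval)

# monitor(set())   # runs forever, re-probing every few seconds
```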

20.
Developing energy-efficient clusters not only reduces electricity costs but also improves system reliability. Existing scheduling strategies developed for energy-efficient clusters conserve energy at the cost of performance, and the performance problem becomes especially apparent when cluster computing systems are heavily loaded. To address this issue, we propose in this paper a novel scheduling strategy, adaptive energy-efficient scheduling (AEES), for aperiodic and independent real-time tasks on heterogeneous clusters with dynamic voltage scaling. The AEES scheme adaptively adjusts voltages according to the workload conditions of a cluster, thereby making the best trade-offs between energy conservation and schedulability. When the cluster is heavily loaded, AEES considers the voltage levels of both new tasks and running tasks to meet tasks' deadlines. Under light load, AEES aggressively reduces the voltage levels to conserve energy while maintaining higher guarantee ratios. We conducted extensive experiments to compare AEES with an existing algorithm, MEG, as well as two baseline algorithms, MELV and MEHV. Experimental results show that AEES significantly improves on the scheduling quality of MELV, MEHV, and MEG.
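A hypothetical sketch of the adaptive rule the abstract describes: under light load, pick the lowest voltage/frequency level that still meets the task's deadline; under heavy load, consider only the higher levels to protect schedulability. The level table and admission test are invented, not taken from AEES.

```python
# Hypothetical workload-adaptive voltage scaling: per task, choose the lowest
# DVS level that still meets its deadline, but under heavy load skip the slow
# levels to protect schedulability. Levels (relative speed, relative power)
# and the load-based restriction are illustrative assumptions.

LEVELS = [            # sorted slow -> fast
    (0.5, 0.25),
    (0.7, 0.49),
    (0.85, 0.72),
    (1.0, 1.00),
]

def pick_level(wcet_at_full_speed, slack, heavy_load):
    candidates = LEVELS[-2:] if heavy_load else LEVELS   # skip slow levels when busy
    for speed, power in candidates:
        if wcet_at_full_speed / speed <= slack:          # deadline still met
            return speed, power
    return LEVELS[-1]                                    # fall back to full speed

# Light load: 4 ms of work and 10 ms of slack can run at the slowest level.
print(pick_level(wcet_at_full_speed=4.0, slack=10.0, heavy_load=False))
# Heavy load: only the fastest levels are considered.
print(pick_level(wcet_at_full_speed=4.0, slack=10.0, heavy_load=True))
```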
