Similar Documents
A total of 19 similar documents were found (search time: 140 ms).
1.
By integrating a centralized lookup scheme based on the client/server model with a decentralized lookup scheme based on the peer-to-peer computing model, this paper proposes a hybrid lookup scheme. The hybrid scheme retains the strengths of decentralized lookup, such as scalability, high fault tolerance, and self-organization, while also offering the advantages of centralized lookup, namely easier centralized management and control and better security. Simulation results show that the scheme outperforms conventional decentralized lookup schemes on the RDP metric, and that its actual query time overhead is far lower than theirs.

2.
顾诚  罗建 《计算机应用》2003,23(7):40-41
How to schedule and manage Web servers as the network workload alternates between busy and idle periods is one of the new problems facing Web applications. Because systems increasingly aim to give users fast, timely responses, more and more of them adopt Web server cluster technology. Targeting the workload characteristics of Web servers and using queueing theory as a tool, this paper proposes a new management algorithm for Web server clusters. The algorithm adds or removes processing capacity by adjusting the number of servers, achieving scalable management of the Web server cluster.
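
The abstract does not give the algorithm itself; as a rough illustration of the general idea (sizing the active server set with a queueing model), the Python sketch below approximates each server as an M/M/1 queue receiving an even 1/c share of the traffic and picks the smallest server count that meets a response-time target. The arrival rate, per-server service rate, and target are assumed example values, not figures from the paper.

    def servers_needed(arrival_rate, service_rate, target_response, c_max=64):
        """Smallest server count c whose per-server M/M/1 response time meets the target."""
        for c in range(1, c_max + 1):
            per_server_rate = arrival_rate / c
            if per_server_rate >= service_rate:
                continue  # still overloaded: each queue would be unstable
            response = 1.0 / (service_rate - per_server_rate)
            if response <= target_response:
                return c
        return c_max

    # e.g. 800 req/s offered, each server handles 120 req/s, 50 ms response-time target
    print(servers_needed(arrival_rate=800, service_rate=120, target_response=0.05))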

3.
Research and Implementation of an Energy-Saving Mechanism for Embedded Web Cluster Servers   (cited 1 time: 0 self-citations, 1 citation by others)
刘峥 《计算机工程》2007,33(13):138-140,143
An energy-saving mechanism for embedded Web cluster servers is proposed and applied to a Linux-based embedded Web cluster platform. Energy-saving schemes are worked out separately for the front-end and back-end servers of the cluster: on the front end, a switching algorithm that powers back-end servers on and off is designed around dynamic load estimation, with a threshold mechanism built into the algorithm; the back-end servers use a DVS (dynamic voltage scaling) policy so that processor frequency is adjusted dynamically according to load. Experimental results show that, compared with the standard Linux kernel and existing energy-saving schemes, the proposed scheme saves a significant amount of energy.
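
A minimal Python sketch of the kind of front-end switching algorithm the abstract describes, using a pair of hysteresis thresholds; the thresholds, node names, and capacity figure are assumptions, and the paper's actual DVS logic on the back-end nodes is not reproduced here.

    UP_THRESHOLD = 0.75    # average utilization above which another node is woken up
    DOWN_THRESHOLD = 0.35  # average utilization below which a node is put to sleep

    def adjust_cluster(active, standby, est_request_rate, per_server_capacity):
        """One control step of the front-end switching algorithm (sketch only)."""
        utilization = est_request_rate / (len(active) * per_server_capacity)
        if utilization > UP_THRESHOLD and standby:
            active.append(standby.pop())      # power a back-end node on before overload
        elif utilization < DOWN_THRESHOLD and len(active) > 1:
            standby.append(active.pop())      # drain and power a node off to save energy
        return active, standby

    active, standby = ["node1", "node2"], ["node3", "node4"]
    print(adjust_cluster(active, standby, est_request_rate=260, per_server_capacity=150))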

4.
In recent years, Web server cluster technology has attracted wide attention and adoption among research institutions at home and abroad thanks to its good scalability and strong processing capacity, and its load-balancing algorithms are a particularly active research topic. This paper studies load balancing in Web server clusters and, to distribute tasks evenly and effectively, proposes a genetic algorithm that assigns tasks dynamically according to the load on each server in the system. Experiments show that the method effectively balances the load across the Web server cluster.
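
The abstract does not spell out the encoding or operators used; the toy Python sketch below shows one plausible formulation, assuming a chromosome that maps each pending request to a server and a fitness that penalizes the most heavily loaded server. Population size, mutation rate, and the example workload are made-up values.

    import random

    def ga_assign(task_costs, base_loads, pop=40, gens=200, mut=0.05):
        """Evolve an assignment of tasks to servers that minimizes the heaviest load."""
        n_tasks, n_srv = len(task_costs), len(base_loads)

        def peak_load(chrom):
            loads = list(base_loads)
            for t, s in enumerate(chrom):
                loads[s] += task_costs[t]
            return max(loads)

        population = [[random.randrange(n_srv) for _ in range(n_tasks)] for _ in range(pop)]
        for _ in range(gens):
            population.sort(key=peak_load)
            parents = population[: pop // 2]                  # keep the fitter half
            children = []
            while len(children) < pop - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, n_tasks)            # one-point crossover
                child = a[:cut] + b[cut:]
                child = [random.randrange(n_srv) if random.random() < mut else g
                         for g in child]                      # random-reset mutation
                children.append(child)
            population = parents + children
        return min(population, key=peak_load)

    # e.g. distribute 8 requests over 3 servers that already carry some load
    print(ga_assign([5, 3, 8, 2, 7, 4, 6, 1], [10, 4, 7]))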

5.
Web QoS is an important topic in current Web computing research, aimed at providing end-to-end QoS guarantees in Web computing environments. This paper proposes a QoS-enabled Web server cluster architecture together with its implementation techniques. The system is a two-tier structure consisting of a QoS-aware cluster switch and a Web server cluster; through the cooperation of the cluster switch and the Web servers, it realizes a distributed QoS guarantee framework that combines priority forwarding with priority servicing and fully satisfies Web users' quality-of-service requirements. System testing shows that the prototype meets its performance design targets.

6.
A Comparison of Load-Balancing Algorithms for Web Clusters   (cited 3 times: 0 self-citations, 3 citations by others)
邱钊  陈明锐 《现代计算机》2006,(8):61-63,90
As Internet applications spread, the performance demands placed on Web servers keep rising. Grouping multiple hosts into a cluster that presents a single Web service to the outside is currently a popular approach offering good price/performance, high reliability, and scalability, and the performance of such a cluster hinges on its balancing algorithm. Based on the LVS project, this paper analyzes the various balancing algorithms and compares their performance experimentally, providing useful guidance for building Web cluster systems.
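
As an illustration of one of the schedulers LVS offers, here is a minimal Python sketch of weighted least-connections (wlc), which picks the real server with the smallest active-connections-to-weight ratio; the server fields and example pool are assumptions for the sketch, not LVS internals.

    from dataclasses import dataclass

    @dataclass
    class RealServer:
        name: str
        weight: int          # capacity weight assigned by the administrator
        active_conns: int    # connections currently being served

    def wlc_pick(servers):
        """Pick the server with the smallest active-connections-to-weight ratio."""
        eligible = [s for s in servers if s.weight > 0]
        return min(eligible, key=lambda s: s.active_conns / s.weight)

    pool = [RealServer("rs1", weight=3, active_conns=40),
            RealServer("rs2", weight=1, active_conns=10),
            RealServer("rs3", weight=2, active_conns=22)]
    chosen = wlc_pick(pool)
    chosen.active_conns += 1     # the director forwards the new connection here
    print(chosen.name)           # rs2 in this example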

7.
A Scalable Parallel Server Cluster with Differentiated Service Levels   (cited 8 times: 3 self-citations, 5 citations by others)
铁玲  诸鸿文  戎蒙恬 《计算机工程》2001,27(1):28-29,59
Using parallel Web server cluster technology to deliver high-performance Web service has become a trend; the technique offers high performance, good scalability, high reliability, and low cost. This article introduces current Web server cluster architectures and several key techniques, designs on that basis a Web server cluster with differentiated service levels, and gives a description of its structure and its resource management policies.

8.
Web servers are expected to provide uninterrupted service. This paper presents a Linux-cluster-based scheme for building a highly available Web server farm and discusses its principles and construction process.

9.
《微型机与应用》2017,(4):10-13
Linux Virtual Server (LVS) is a load-balancing technology widely used in enterprise clusters, and existing research on LVS has focused mainly on the load-balancing performance of Web server clusters. In some practical scenarios, however, LVS can be combined directly with a database cluster. This paper combines LVS with a database cluster, proposes a scheme for testing the performance of a database cluster under the LVS architecture, and uses HP LoadRunner to run load tests against the database cluster under different load-balancing algorithms. Through analysis and comparison of the test data, it identifies which of the scheduling algorithms provided by LVS is better suited to database clusters.

10.
Web cluster servers are widely used to improve Web server performance, and guaranteeing the quality of service (QoS) of Web services has become a pressing problem; differentiated services are now a focal point of QoS research. This paper analyzes the statistical characteristics of Web request service times and models both the Web server and the Web cluster server with an M/G/1 FCFS queueing model. Based on an analysis of the model, a proportional stretch-factor (slowdown) differentiated service scheme for heterogeneous Web cluster servers is designed and implemented, together with a request scheduling algorithm based on a probability space. Requests are divided into multiple classes, and regardless of the system load, the system keeps the average stretch factor of each class proportional to parameters specified in advance. Real-world tests show that the scheme meets the predictability and controllability requirements of relative differentiated services.
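
For reference, a short Python sketch of the M/G/1 FCFS quantities the abstract relies on: the Pollaczek-Khinchine mean waiting time and a simple stretch-factor estimate taken as mean response time over mean service time. The example arrival rate and service-time moments are assumed values.

    def mg1_mean_wait(arrival_rate, mean_service, second_moment):
        """Pollaczek-Khinchine mean wait W_q = lambda * E[S^2] / (2 * (1 - rho))."""
        rho = arrival_rate * mean_service
        assert rho < 1, "the queue is unstable when rho >= 1"
        return arrival_rate * second_moment / (2.0 * (1.0 - rho))

    def mean_stretch_estimate(arrival_rate, mean_service, second_moment):
        """Ratio of mean response time to mean service time (a simple stretch-factor estimate)."""
        wq = mg1_mean_wait(arrival_rate, mean_service, second_moment)
        return (wq + mean_service) / mean_service

    # e.g. 50 req/s, 10 ms mean service time, E[S^2] = 4e-4 s^2 (highly variable request sizes)
    print(mean_stretch_estimate(arrival_rate=50, mean_service=0.010, second_moment=4e-4))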

11.
Modern Web-based application infrastructures are based on clustered multitiered architectures, where request distribution occurs in two sequential stages: over a cluster of Web servers and over a cluster of application servers. Much work has focused on strategies for distributing requests across a Web server cluster in order to improve the overall throughput across the cluster. The same strategies have then been applied at the application layer, on the assumption that they transfer directly. In this paper, we argue that the problem of distributing requests across an application server cluster is fundamentally different from the Web server request distribution problem due to core differences in request processing in Web and application servers. We devise an approach for distributing requests across a cluster of application servers such that the overall system throughput is enhanced and the load across the application servers is balanced.

12.
While the World Wide Web (www) may appear to be intrinsically scalable through the distribution of files across a series of decentralized servers, there are instances where this form of load distribution is both costly and resource intensive. In such cases it may be necessary to administer a centrally located and managed http server. Given the exponential growth of the internet in general, and www in particular, it is increasingly difficult for persons and organizations to properly anticipate their future http server needs, both in human resources and hardware requirements. It is the purpose of this paper to outline the methodology used at the National Center for Supercomputing Applications in building a scalable World Wide Web server. The implementation described in the following pages allows for dynamic scalability by rotating through a pool of http servers that are alternately mapped to the hostname alias of the www server. The key components of this configuration include: (1) a cluster of identically configured http servers; (2) use of Round-Robin DNS for distributing http requests across the cluster; (3) use of a distributed file system mechanism for maintaining a synchronized set of documents across the cluster; and (4) a method for administering the cluster. The result of this design is that we are able to add any number of servers to the available pool, dynamically increasing the load capacity of the virtual server. Implementation of this concept has eliminated perceived and real vulnerabilities in our single-server model that had negatively impacted our user community. This particular design has also eliminated the single point of failure inherent in our single-server configuration, increasing the likelihood of continued and sustained availability. While the load is currently distributed in an unpredictable and, at times, deleterious manner, early implementation and maintenance of this configuration have proven promising and effective.
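
A minimal Python sketch of the Round-Robin DNS rotation described above, in which the hostname alias maps to a pool of identically configured httpd hosts and the pool can grow dynamically; the hostname and addresses are made up for the example, and a real deployment would do this in the name server rather than in application code.

    from itertools import cycle

    class RoundRobinDNS:
        """Toy name server: the www alias rotates over a pool of identical httpd hosts."""
        def __init__(self, alias, addresses):
            self.alias = alias
            self._pool = list(addresses)
            self._rotation = cycle(self._pool)

        def resolve(self, name):
            if name != self.alias:
                raise KeyError(name)
            return next(self._rotation)       # each lookup gets the next host in the pool

        def add_server(self, address):
            self._pool.append(address)        # growing the pool raises total capacity
            self._rotation = cycle(self._pool)

    dns = RoundRobinDNS("www.example.org", ["10.0.0.11", "10.0.0.12", "10.0.0.13"])
    print([dns.resolve("www.example.org") for _ in range(4)])   # wraps back to .11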

13.
High-volume Web sites often use clusters of servers to support their architectures. A load balancer in front of such clusters directs requests to the various servers in a way that equalizes, as much as possible, the load placed on each. There are two basic approaches to scaling Web clusters: adding more servers of the same type (scaling out, or horizontally) or upgrading the capacity of the servers in the cluster (scaling up, or vertically). Although more detailed and complex models would be required to obtain more accurate results about such systems' behavior, simple queuing theory provides a reasonable abstraction level to shed some insight on which scaling approach to employ in various scenarios. Typical issues in Web cluster design include: whether to use a large number of low-capacity inexpensive servers or a small number of high-capacity costly servers to provide a given performance level; how many servers of a given type are required to provide a certain performance level at a given cost; and how many servers are needed to build a Web site with a given reliability. Using queuing theory, I examine the average response time, capacity, cost, and reliability tradeoffs involved in designing Web server clusters.
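
To make the tradeoff concrete, the sketch below uses the Erlang C formula for an M/M/c queue to compare one fast server (scaling up) against four slower servers of equal total capacity (scaling out); the arrival and service rates are assumed example numbers, not figures from the article.

    import math

    def mmc_response_time(arrival_rate, service_rate, c):
        """Mean response time of an M/M/c queue via the Erlang C formula."""
        rho = arrival_rate / service_rate
        assert rho < c, "total capacity must exceed the arrival rate"
        summ = sum(rho**k / math.factorial(k) for k in range(c))
        tail = (rho**c / math.factorial(c)) * c / (c - rho)
        prob_wait = tail / (summ + tail)
        return 1.0 / service_rate + prob_wait / (c * service_rate - arrival_rate)

    arrival = 90.0   # requests per second offered to the site
    # scale up: one server at 100 req/s; scale out: four servers at 25 req/s each
    print("scale-up :", mmc_response_time(arrival, service_rate=100.0, c=1))
    print("scale-out:", mmc_response_time(arrival, service_rate=25.0, c=4))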

14.
State-of-the-art cluster-based data centers consisting of three tiers (Web server, application server, and database server) are being used to host complex Web services such as e-commerce applications. The application server handles dynamic and sensitive Web contents that need protection from eavesdropping, tampering, and forgery. Although the secure sockets layer (SSL) is the most popular protocol to provide a secure channel between a client and a cluster-based network server, its high overhead degrades the server performance considerably and, thus, affects the server scalability. Therefore, improving the performance of SSL-enabled network servers is critical for designing scalable and high-performance data centers. In this paper, we examine the impact of SSL offering and SSL-session-aware distribution in cluster-based network servers. We propose a back-end forwarding scheme, called ssl_with_bf, that employs a low-overhead user-level communication mechanism like virtual interface architecture (VIA) to achieve a good load balance among server nodes. We compare three distribution models for network servers, round robin (RR), ssl_with_session, and ssl_with_bf, through simulation. The experimental results with 16-node and 32-node cluster configurations show that, although the session reuse of ssl_with_session is critical to improve the performance of application servers, the proposed back-end forwarding scheme can further enhance the performance due to better load balancing. The ssl_with_bf scheme can reduce the average latency by about 40 percent and improve throughput across a variety of workloads.
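
A highly simplified Python sketch of the idea behind SSL-session-aware distribution with back-end forwarding: requests carrying a known session ID return to the node that negotiated it (so the handshake is reused), and an overloaded owner hands the decrypted work to a lightly loaded peer. The threshold, node names, and data structures are assumptions, not the paper's ssl_with_bf implementation.

    FORWARD_THRESHOLD = 0.8    # utilization above which the session owner forwards work

    class Dispatcher:
        def __init__(self, node_utilization):
            self.util = node_utilization       # node name -> current utilization (0..1)
            self.session_owner = {}            # SSL session id -> node that negotiated it

        def serve(self, ssl_session_id):
            owner = self.session_owner.setdefault(
                ssl_session_id, min(self.util, key=self.util.get))
            if self.util[owner] > FORWARD_THRESHOLD:
                # back-end forwarding: the owner still terminates SSL (session reuse),
                # but hands the decrypted request to the least-loaded peer
                return owner, min(self.util, key=self.util.get)
            return owner, owner                # owner both terminates SSL and serves

    d = Dispatcher({"node1": 0.9, "node2": 0.2, "node3": 0.5})
    print(d.serve("sess-abc"))    # a new session lands on node2 and is served there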

15.
With increasing richness in features such as personalization of content, Web applications are becoming increasingly complex and hence compute intensive. Traditional approaches for improving performance of static content Web sites have been based on the assumption that static content such as images is network intensive. However, these methods are not applicable to the dynamic content applications which are more compute intensive than static content. This paper proposes a suite of algorithms which jointly optimize the performance of dynamic content applications by reducing the client access times while also minimizing the resource utilization. A server migration algorithm allocates servers on-demand within a cluster such that the client access times are not affected even under sudden overload conditions. Further, a server selection mechanism enables statistical multiplexing of resources across clusters by redirecting requests away from overloaded clusters. We also propose a cluster decision algorithm which decides whether to migrate in additional servers at the local cluster or redirect requests remotely under different workload conditions. Through a combination of analytical modeling, trace-driven simulation over traces from large e-commerce sites and testbed implementation, we explore the performance savings achieved by the proposed algorithms.
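
A minimal sketch, under assumed thresholds and costs, of the kind of cluster decision logic described above: when the local cluster is overloaded, either migrate in an on-demand server or redirect requests to a remote cluster. The paper's actual algorithms are analytical and workload-driven; this only illustrates the decision structure.

    OVERLOAD = 0.85   # local utilization above which the cluster reacts (assumed value)

    def cluster_decision(local_util, spare_servers, remote_util,
                         migrate_delay_s, redirect_penalty_s):
        """Choose between growing the local cluster and redirecting to a remote one."""
        if local_util <= OVERLOAD:
            return "serve-locally"
        if spare_servers > 0 and migrate_delay_s <= redirect_penalty_s:
            return "migrate-in-server"     # bring an on-demand server into this cluster
        if remote_util < OVERLOAD:
            return "redirect-to-remote"    # statistically multiplex across clusters
        return "migrate-in-server" if spare_servers > 0 else "shed-load"

    print(cluster_decision(local_util=0.92, spare_servers=1,
                           remote_util=0.40, migrate_delay_s=5.0, redirect_penalty_s=0.2))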

16.
Nowadays, we are witnessing an increasing growth of Web 2.0 content such as micronews, blogs and RSS feeds. This trend, exemplified by applications like Twitter and LiveJournal, is starting to be slowed down not only by the limitations of existing services, which are proprietary and centralized, but also by the cumbersome process of discovering and tracking interesting content. This content is generally ephemeral and thus difficult to index by conventional Web search technology. This problem is exacerbated by the passive role adopted by Web content providers: it is surprising that Web servers publish information and expect that thousands of other servers (search engines, Web-based aggregators like GoogleNews, etc.) advertise their content to the world. In this work we propose p2pWeb, an open, decentralized infrastructure that enables Web servers to use their spare capacity to filter, aggregate and disseminate Web content in a scalable and timely manner. p2pWeb is flexible enough to support a broad variety of services. The main property of p2pWeb is that all communication abstractions, including aggregation and multicast, are implemented hierarchically and using the HTTP protocol. Simulation results certify the viability of our approach.

17.
In this paper, a distributed Web and cache server called MOWS is described. MOWS is written in Java and built from modules that can be loaded locally or remotely. These modules implement various features of Web and cache servers and enable MOWS to run as a cluster of distributed Web servers. In addition to its distributed nature, MOWS can integrate external services using its own external interface. Java programs conforming to this interface can be loaded locally or remotely and executed at the server. The resulting system will potentially provide effective Web access by both utilizing commonly available computing resources and offering distributed server functionality. Design considerations and the system architecture of MOWS are described, and several applications are presented to show its benefits.

18.
《Advanced Robotics》2013,27(8):913-932
In this paper, an attempt has been made to incorporate some special features into the conventional particle swarm optimization (PSO) technique for decentralized swarm agents. A modified particle swarm algorithm (MPSA) for the self-organization of decentralized swarm agents is proposed and studied. In the MPSA, the update rule for the best agent in a swarm is based on a proportional control concept, and the fitness of each agent is evaluated on-line. A virtual zone is developed to avoid conflict among the agents. In this scheme, each agent self-organizes to flock to the best agent in the swarm and to migrate toward a moving target while avoiding obstacles and collisions among agents. Aided by advantages such as cooperative group behaviors, flexible formation, and scalability, the proposed approach enables large-scale swarm agents to distribute themselves optimally for a given task. The simulation results show that the proposed scheme effectively constructs a self-organized swarm system with the capability of flocking and migration.
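
For orientation, a short Python sketch of the conventional PSO update that the MPSA builds on; the abstract's proportional-control rule for the best agent and the virtual-zone collision avoidance are not detailed there, so they are not reproduced, and the coefficients and example target are assumed values.

    import random

    W, C1, C2 = 0.7, 1.5, 1.5    # inertia weight, cognitive and social coefficients (assumed)

    def pso_step(pos, vel, pbest, gbest, fitness):
        """One conventional velocity/position update for every agent in the swarm."""
        for i in range(len(pos)):
            for d in range(len(pos[i])):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (W * vel[i][d]
                             + C1 * r1 * (pbest[i][d] - pos[i][d])
                             + C2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pos[i]) < fitness(gbest):
                    gbest[:] = pos[i]
        return pos, vel, pbest, gbest

    # e.g. ten 2-D agents converging on a target at (5, 5)
    target = [5.0, 5.0]
    fit = lambda p: (p[0] - target[0]) ** 2 + (p[1] - target[1]) ** 2
    pos = [[random.uniform(-10, 10), random.uniform(-10, 10)] for _ in range(10)]
    vel = [[0.0, 0.0] for _ in range(10)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=fit)[:]
    for _ in range(100):
        pos, vel, pbest, gbest = pso_step(pos, vel, pbest, gbest, fit)
    print(gbest)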

19.
Cluster-based, scalable computer networks have gradually become the foundation of high-performance network server architectures. This article studies several typical cluster architectures, including RR-DNS with Web servers and an AFS server, a TCP router fronting Web servers, and master/slave servers, and gives an in-depth comparative analysis of their parallelism issues, such as scalability, reliability, and load-balancing strategies.
