Similar Documents
 19 similar documents found; search time: 140 ms
1.
Research and Implementation of Scalable Network Services   (Total citations: 1; self-citations: 0; other citations: 1)
First, this paper proposes Linux Virtual Server (LVS), a cluster-based architecture for scalable network services, organized in three tiers: the load scheduler, the server pool, and back-end storage. The load scheduler employs IP load balancing and content-based request distribution. LVS clusters provide load balancing, scalability, and high availability, and can be used to build many scalable network services. The paper then proposes a geographically distributed LVS cluster system, which saves network bandwidth, improves the quality of network service, and offers good disaster resilience. For IP load balancing, exploiting the asymmetric nature of network services and aiming to overcome the poor scalability of VS/NAT, the paper proposes two ways of implementing a virtual server, via IP tunneling and via direct routing: VS/TUN and VS…

2.
Traditional cluster servers rely on a front-end dispatcher that distributes client requests by load balancing, whereas the ASAS cluster adopts an active scheduling policy based on analysis of back-end server load. It is difficult to build a precise mathematical model relating the back-end servers' CPU utilization, client request response time, and connection capacity, so a conventional controller is hard to design. A fuzzy control algorithm is therefore designed here: its inputs are CPU utilization and client request response time, and its output is the number of connections. Experiments show that this fuzzy control policy achieves good performance.
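The abstract above names only the controller's inputs (CPU utilization, response time) and output (connection count). A minimal Mamdani-style sketch of such a fuzzy controller follows; the membership functions, thresholds, and rule table are illustrative assumptions, not the paper's actual fuzzy sets:

```python
def tri(x, a, b, c):
    """Triangular membership function: peaks at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_cpu(u):
    # CPU utilization in [0, 1]: low / normal / high (assumed breakpoints)
    return {"low": tri(u, -0.4, 0.0, 0.5),
            "normal": tri(u, 0.2, 0.5, 0.8),
            "high": tri(u, 0.5, 1.0, 1.4)}

def fuzzify_rt(ms):
    # Response time in milliseconds: fast / slow (assumed breakpoints)
    return {"fast": tri(ms, -200, 0, 300),
            "slow": tri(ms, 100, 400, 700)}

# Rule base: (cpu level, rt level) -> change in the admitted-connection quota
RULES = {
    ("low", "fast"): +20, ("low", "slow"): 0,
    ("normal", "fast"): +10, ("normal", "slow"): -10,
    ("high", "fast"): -10, ("high", "slow"): -30,
}

def fuzzy_step(cpu_util, resp_ms):
    """One control step: weighted-average defuzzification of the fired rules."""
    cpu, rt = fuzzify_cpu(cpu_util), fuzzify_rt(resp_ms)
    num = den = 0.0
    for (c, r), delta in RULES.items():
        w = min(cpu[c], rt[r])          # rule firing strength (AND = min)
        num += w * delta
        den += w
    return num / den if den else 0.0
```

A busy server (high CPU, slow responses) yields a negative adjustment, shrinking the connection quota; a lightly loaded one yields a positive adjustment.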

3.
A Dynamic Separated Scheduling Policy for Web Cluster Systems   (Total citations: 1; self-citations: 0; other citations: 1)
The static separated scheduling policy (SSSP) cannot allocate server resources effectively. The dynamic separated scheduling policy (DSSP) schedules static requests per requested file and dynamic requests per user session. The request dispatcher monitors the state of the back-end servers and, according to resource usage, classifies them as lightly loaded, heavily loaded, or overloaded: lightly loaded servers may accept new request units; heavily loaded servers accept no new request units but continue serving those already accepted; overloaded servers migrate some of their request units to lightly loaded servers. Experimental results show that DSSP clearly outperforms SSSP.
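The light/heavy/overloaded classification and unit migration described above can be sketched as follows; the load thresholds and server metric are assumptions (the abstract does not give concrete numbers), and the sketch does not model load changing as units move:

```python
# Illustrative thresholds on a normalized load metric in [0, 1].
LIGHT, OVER = 0.6, 0.9

def classify(load):
    if load < LIGHT:
        return "light"
    if load < OVER:
        return "heavy"
    return "overloaded"

class Dispatcher:
    def __init__(self, servers):
        self.servers = servers                    # name -> load in [0, 1]
        self.units = {s: [] for s in servers}     # request units per server

    def assign(self, unit):
        """New request units go only to light servers (least loaded first)."""
        light = [s for s in self.servers
                 if classify(self.servers[s]) == "light"]
        if not light:
            return None                           # no capacity: reject/queue
        target = min(light, key=lambda s: self.servers[s])
        self.units[target].append(unit)
        return target

    def rebalance(self):
        """Overloaded servers migrate request units to light servers."""
        moved = []
        for s, load in self.servers.items():
            if classify(load) == "overloaded" and self.units[s]:
                unit = self.units[s].pop()
                t = self.assign(unit)
                if t:
                    moved.append((unit, s, t))
        return moved
```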

5.
Based on an analysis of current and future demand for network services, this paper proposes Linux Virtual Server (LVS), a cluster-based architecture for scalable network services, structured in three tiers: the load scheduler, the server pool, and back-end storage. The load scheduler employs IP load balancing and content-based request distribution. LVS clusters provide load balancing, scalability, and high availability, and can be applied to build many scalable network services. The paper further proposes a geographically distributed LVS cluster system, which saves network bandwidth, improves the quality of network service, and offers good disaster resilience.
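The load scheduler's IP load balancing in real LVS runs inside the kernel (the IPVS code); weighted round-robin is one of its standard scheduling algorithms. A user-space sketch of that algorithm, with made-up server names and weights, illustrates the idea:

```python
from functools import reduce
from math import gcd

class WeightedRoundRobin:
    """User-space sketch of LVS-style weighted round-robin scheduling.
    Real LVS implements this in the kernel; names/weights are illustrative."""

    def __init__(self, servers):
        self.servers = list(servers.items())   # [(name, weight), ...]
        self.i = -1                            # index of last selected server
        self.cw = 0                            # current weight threshold
        self.gcd = reduce(gcd, (w for _, w in self.servers))
        self.maxw = max(w for _, w in self.servers)

    def next(self):
        # Cycle through servers; each full pass lowers the weight threshold
        # by the gcd, so heavier servers are picked proportionally more often.
        while True:
            self.i = (self.i + 1) % len(self.servers)
            if self.i == 0:
                self.cw -= self.gcd
                if self.cw <= 0:
                    self.cw = self.maxw
            if self.servers[self.i][1] >= self.cw:
                return self.servers[self.i][0]
```

With weights a=3, b=1, four consecutive picks route three requests to a and one to b.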

6.
An Analysis of Scheduling Algorithms and Web Cluster Performance   (Total citations: 7; self-citations: 1; other citations: 7)
Queueing theory is used to analyze how load-balancing and locality-based scheduling algorithms affect the performance of Web cluster servers. The conclusions are: (1) because locality-based scheduling makes full use of the back-end nodes' main-memory resources, the Web cluster performs better under it than under load-balancing scheduling; (2) within locality-based scheduling, distributing requests purely by request content causes load imbalance, so a moderate amount of file replication must be allowed for the Web cluster's performance to improve significantly.
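Conclusion (2) above is the idea behind LARD-style dispatching with limited replication: route a file's requests to its caching server for locality, but replicate the file to another server once its owner gets too hot. A sketch, with an illustrative load threshold (the paper's analysis is queueing-theoretic, not this code):

```python
class LocalityDispatcher:
    """Locality-aware dispatch with limited replication of hot files.
    The `high` threshold and server set are illustrative assumptions."""

    def __init__(self, servers, high=10):
        self.load = {s: 0 for s in servers}   # outstanding requests per server
        self.owners = {}                      # file -> set of servers caching it
        self.high = high                      # load above which we replicate

    def dispatch(self, path):
        owners = self.owners.setdefault(path, set())
        if not owners:
            # First request for this file: assign it to the least-loaded server.
            target = min(self.load, key=self.load.get)
            owners.add(target)
        else:
            target = min(owners, key=self.load.get)
            # Pure locality can overload a hot file's owner; replicate then.
            if self.load[target] > self.high:
                extra = min(self.load, key=self.load.get)
                owners.add(extra)
                target = min(owners, key=self.load.get)
        self.load[target] += 1
        return target
```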

7.
This paper analyzes the slow response times that network service providers commonly face under high concurrent traffic, studies the principles of high-availability load balancing based on LVS (Linux Virtual Server) and Keepalived (failover software), and implements a scalable, highly available load balancing system with fault and disaster tolerance. The system is built in VS/DR mode, with IP-layer and content-based request distribution scheduling algorithms implemented in the Linux kernel. Experiments on the system's high availability and load balancing show that it can spread client requests across servers and effectively relieve the load on any single server.

8.
Research and Implementation of a Web Content-Based Request Distribution System for Cluster Servers   (Total citations: 1; self-citations: 0; other citations: 1)
With the rapid growth of Web application services on the Internet, Web services offering high performance, high reliability, and high scalability have become an urgent user demand. After analyzing existing request distribution policies, this paper presents the design and implementation of a content-based load distribution policy for cluster servers that also takes the cache locality of the back-end servers into account.

9.
Chen Gengmin (陈耿珉), Yan Puliu (晏蒲柳), et al. 《计算机工程》 (Computer Engineering), 2003, 29(1): 126-127, 148
With the rapid growth of Web application services on the Internet, Web services offering high performance, high reliability, and high scalability have become an urgent user demand. After analyzing existing request distribution policies, this paper presents the design and implementation of a content-based load distribution policy for cluster servers that also takes the cache locality of the back-end servers into account.

10.
With the rapid growth of the Internet, the scheduling and distribution of Web requests has become a hot research topic at home and abroad. To address the poor scalability, high overhead, and lack of generality of existing Web schedulers, this paper designs and implements Cuttle, a new, general, highly scalable content-based scheduler. The system is implemented in the kernel TCP/IP layer and employs techniques such as a pseudo-server, piggybacking, interception, a spoofed three-way handshake, and the Mix-LARD scheduling policy. Experiments show that Cuttle scales well and has low response latency.

11.
The Kernel TCP Virtual Server (KTCPVS) is a subproject of the Linux server cluster system, proposed to relieve the server bottleneck in client/server network models. The article introduces the origins of KTCPVS and its advantages over earlier TCP gateways, describes its architecture, and, through a study of its source code, analyzes how data exchange between client and server is implemented in the Linux kernel.

12.
Workflow management systems have been widely used in many business process management (BPM) applications. There are also a lot of companies offering commercial software solutions for BPM. However, most of them adopt a simple client/server architecture with only a single centralized workflow-management server. As the number of incoming workflow requests increases, the single workflow-management server might become the performance bottleneck, leading to unacceptable response time. Development of parallel servers might be a possible solution. However, a parallel server architecture with a fixed number of servers cannot efficiently utilize computing resources under time-varying system workloads. This paper presents a distributed workflow-management server architecture which adopts dynamic resource provisioning mechanisms to deal with the probable performance bottleneck. We implemented a prototype system of the proposed architecture based on a commercial workflow management system, Agentflow. A series of experiments were conducted on the prototype system for performance evaluation. The experimental results indicate that the proposed architecture can deliver scalable performance and effectively maintain stable request response time under a wide range of incoming workflow request workloads.
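The dynamic-provisioning idea above (add servers when response time degrades, release them when the load falls) can be sketched as a simple threshold controller. The thresholds and server bounds are assumptions for illustration; the paper's actual Agentflow-based mechanism is not specified here:

```python
def provision(active, resp_ms, target_ms=500, slack_ms=200,
              min_servers=1, max_servers=8):
    """Return the new number of workflow-management servers.

    active    -- currently provisioned servers
    resp_ms   -- measured request response time (milliseconds)
    target_ms -- response-time ceiling that triggers scale-out (assumed)
    slack_ms  -- hysteresis band to avoid oscillating scale-in (assumed)
    """
    if resp_ms > target_ms and active < max_servers:
        return active + 1                  # scale out under pressure
    if resp_ms < target_ms - slack_ms and active > min_servers:
        return active - 1                  # release idle capacity
    return active
```

The hysteresis band (`slack_ms`) keeps the controller from flapping when response time hovers near the target.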

13.
It is widely recognized that JPEG2000 addresses key issues in medical imaging: storage, communication, sharing, remote access, interoperability, and presentation scalability. Therefore, JPEG2000 support was added to the DICOM standard in Supplement 61. Two approaches to supporting JPEG2000 medical images are explicitly defined by the DICOM standard: replacing the DICOM image format with the corresponding JPEG2000 codestream, or using the Pixel Data Provider service of DICOM Supplement 106. The latter assumes two-step retrieval of a medical image: a DICOM request and response from a DICOM server, followed by a JPIP request and response from a JPEG2000 server. We propose a novel strategy for transmission of scalable JPEG2000 images extracted from a single codestream over a DICOM network using the DICOM Private Data Element, without sacrificing system interoperability. It employs the request redirection paradigm: a DICOM request and response from the JPEG2000 server through the DICOM server. The paper presents a programming solution for implementing the request redirection paradigm in a DICOM-transparent manner.

14.
KTCPVS: An In-Kernel Layer-7 Switch   (Total citations: 4; self-citations: 2; other citations: 2)
This paper proposes a method of implementing a Layer-7 switch in the operating system kernel. Using kernel threads for content-based request distribution avoids repeated switches between kernel and user space and the associated memory-copy overhead, greatly improving the efficiency of Layer-7 switching. The paper discusses in detail the architecture of the kernel Layer-7 switch (KTCPVS), its implementation in the Linux kernel, and its performance evaluation.

15.
Because of their size, service times, and drain on server resources, multimedia objects require specialized replication systems in order to meet demand and ensure content availability. We present a novel method for creating replication systems where the replicated objects' sizes and/or per-object service times are large. Such replication systems are well-suited to delivering multimedia objects on the Internet. Assuming that user request patterns to the system are known, we show how to create replication systems that distribute read load to servers in proportion to their contribution to system capacity and experimentally show the positive load distribution properties of such systems. However, when user request patterns differ from what the system was designed for, system performance will be affected. Therefore, we also report on results that reveal (i) how server loads are affected and (ii) the impact two system design parameters (indicators of a system's load distribution qualities) have on server load when request patterns differ from that for which a system was designed.
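Distributing read load to servers in proportion to their contribution to system capacity, as described above, amounts to capacity-weighted random routing. A minimal sketch, with made-up server names and capacities (the paper's actual replica-placement method is more involved than this routing step):

```python
import random

def pick_server(capacities, rng=random.random):
    """Pick a server with probability proportional to its capacity.

    capacities -- {server_name: capacity}; names/values are illustrative.
    """
    total = sum(capacities.values())
    r = rng() * total
    for server, cap in capacities.items():
        r -= cap
        if r < 0:
            return server
    return server  # guard against floating-point edge cases
```

Over many requests, a server holding 75% of total capacity receives roughly 75% of the reads.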

16.
In this paper, we develop an end-to-end analysis of a distributed Video-on-Demand (VoD) system that includes an integrated model of the server and the network subsystems with analysis of its impact on client operations. The VoD system provides service to a heterogeneous client base at multiple playback rates. A class-based service model is developed where an incoming video request can specify a playback rate at which the data is consumed on the client. Using an analytical model, admission control conditions at the server and the network are derived for multi-rate service. We also develop client buffer requirements in the presence of network delay bounds and delay jitter bounds using the same integrated framework of server and network subsystems. Results from an extensive simulation show that request handling policies based on limited redirection of blocked requests to other resources perform better than load sharing policies. The results also show that downgrading the service for blocked requests to a lower bitrate improves VoD system performance considerably. Combining the downgrade option with restrictions on access to high bitrate request classes is a powerful tool for manipulating an incoming request mix into a workload that the VoD system can handle.

17.
Modern Web-based application infrastructures are based on clustered multitiered architectures, where request distribution occurs in two sequential stages: over a cluster of Web servers and over a cluster of application servers. Much work has focused on strategies for distributing requests across a Web server cluster in order to improve the overall throughput across the cluster. The same strategies have typically been applied at the application layer, on the assumption that they transfer directly from the Web server case. In this paper, we argue that the problem of distributing requests across an application server cluster is fundamentally different from the Web server request distribution problem due to core differences in request processing in Web and application servers. We devise an approach for distributing requests across a cluster of application servers such that the overall system throughput is enhanced, and load across the application servers is balanced.

18.
To provide ubiquitous access to the proliferating rich media on the Internet, scalable streaming servers must be able to provide differentiated services to various client requests. Recent advances in transcoding technology make network-I/O bandwidth usage at the server communication ports controllable by request schedulers on the fly. In this article, we propose a transcoding-enabled bandwidth allocation scheme for service differentiation on streaming servers. It aims to deliver high bit rate streams to high priority request classes without overcompromising low priority request classes. We investigate the problem of providing differentiated streaming services at the application level in two aspects: stream bandwidth allocation and request scheduling. We formulate the bandwidth allocation problem as an optimization of a harmonic utility function of the stream quality factors and derive the optimal streaming bit rates for requests of different classes under various server load conditions. We prove that the optimal allocation, referred to as harmonic proportional allocation, not only maximizes the system utility function but also guarantees proportional fair sharing between classes with different prespecified differentiation weights. We evaluate the allocation scheme, in combination with two popular request scheduling approaches, via extensive simulations and compare it with an absolute differentiation strategy and a proportional-share strategy tailored from relative differentiation in networking. Simulation results show that the harmonic proportional allocation scheme can meet the objective of relative differentiation on both short and long timescales, greatly enhance service availability, and maintain low queueing delay when the streaming system is highly loaded.
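The proportional-fair-sharing property above (per-request bitrates in the ratio of the class weights, subject to the server's outbound capacity) can be illustrated with a small sketch. This shows only the proportional-sharing constraint with made-up weights, request counts, and bitrate bounds; it is not the paper's harmonic-utility derivation:

```python
def allocate(classes, capacity, b_min=100, b_max=2000):
    """Per-request bitrates (kbps) with b_i / b_j = w_i / w_j across classes.

    classes  -- {name: (weight, n_requests)}; values are illustrative.
    capacity -- total outbound streaming bandwidth budget (kbps).
    Bitrates are clipped to an assumed encoder range [b_min, b_max].
    """
    # Full capacity spread so that each class's per-request rate is
    # proportional to its differentiation weight.
    denom = sum(w * n for w, n in classes.values())
    return {c: max(b_min, min(b_max, capacity * w / denom))
            for c, (w, n) in classes.items()}
```

With weight 2 for "gold" and 1 for "silver", gold requests stream at twice the silver bitrate, and (absent clipping) the aggregate exactly fills the capacity budget.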

19.
A component application server framework is a particular form of distributed object system platform and is required to be highly reliable, where reliability chiefly means two properties: fault tolerance and fault recovery. The main goal of this paper is to build a software fault-tolerance service framework for component application servers based on distributed objects. We adopt an approach called the Object Fault-tolerance Service (OFS) to handle object fault tolerance, addressing problems including object failure, node faults, network partition, and unpredictable communication delays. The paper describes the OFS service specification and gives a system architecture for an OFS implementation.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号