Similar Literature
20 similar documents retrieved.
1.
File downloads make up a large percentage of the Internet traffic to satisfy various clients using distributed environments for their Cloud, Grid and Internet applications. In particular, the Cloud has become a popular data storage provider and users (individuals and corporations) rely heavily on it to keep their data. Furthermore, most cloud data servers replicate their data storage infrastructures and servers at various sites to meet the overall high demands of their clients and increase availability. However, most of them do not use that replication to enhance the download performance per client. To make use of this redundancy and to enhance download speed, we introduce a fast and efficient concurrent technique for downloading large files from replicated Cloud data servers as well as traditional FTP servers. The technique, DDFTP, utilizes the availability of replicated files on distributed servers to enhance file download times through concurrent downloads of file blocks from opposite directions in the files. DDFTP does not require coordination between the servers and relies on the in-order delivery and reliability features of TCP to provide fast file downloads. In addition, DDFTP offers efficient load balancing among multiple heterogeneous data servers with minimal overhead. As a result, we can maximize network utilization while maintaining efficient load balancing in dynamic environments where resources, current loads and operational properties vary dynamically. We implemented and evaluated DDFTP and experimentally demonstrated considerable performance gains for file downloads compared to other concurrent/parallel file/data download models.
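A minimal sketch of the opposite-direction idea behind DDFTP is given below: two threads pull blocks of the same file from two replicas, one starting at the head and the other at the tail, and stop when they meet. The block size, the server list, and the fetch_block helper are illustrative assumptions, not details from the paper.

```python
import threading

BLOCK_SIZE = 4096  # illustrative block size, not taken from the paper


def fetch_block(server, path, index):
    """Placeholder for a ranged read of block `index` from one replica."""
    # A real client would issue an HTTP Range request or an FTP REST/RETR here.
    return b"\x00" * BLOCK_SIZE


def download_replicated(servers, path, num_blocks):
    """Download blocks of one file from two replicas, working inwards from both ends."""
    blocks = [None] * num_blocks
    lock = threading.Lock()
    forward = range(num_blocks)                # head -> tail
    backward = range(num_blocks - 1, -1, -1)   # tail -> head

    def worker(server, order):
        for i in order:
            with lock:
                if blocks[i] is not None:      # the two workers have met
                    return
                blocks[i] = b""                # claim the block
            blocks[i] = fetch_block(server, path, i)

    threads = [threading.Thread(target=worker, args=(servers[0], forward)),
               threading.Thread(target=worker, args=(servers[1], backward))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return b"".join(blocks)
```

Because the faster replica claims more blocks before the two workers meet, load balancing falls out of the scheme without any coordination between the servers.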

2.
Fault-Tolerance and Load-Balancing Management for Internet Applications
Targeting the three-tier client/server architecture of Internet applications, this paper designs and implements OM, a management system for redundant services. Built with distributed-object technology on a CORBA platform, it provides fault-tolerance and load-balancing management for redundant services, offers a management interface for system administrators, and gives users a transparent mechanism for accessing the services.

3.
With the exponential growth of Internet services over the past few decades, traffic to major websites has risen sharply. Massive numbers of user requests mean that the request rate of a popular site can surge within seconds. Once servers can no longer handle such highly concurrent requests, the resulting network congestion and latency severely degrade the user experience. Load balancing is a key component of a highly available network infrastructure: by introducing a load balancer in front of the backend and spreading the workload across multiple servers, it relieves the enormous pressure that massive concurrent requests place on individual servers and improves the performance and reliability of the backend servers and databases. Nginx, a high-performance HTTP and reverse-proxy server, is being adopted in practice more and more widely. This paper analyzes the load-balancing architecture of Nginx, studies its default weighted round-robin algorithm, and proposes an improved dynamic load-balancing algorithm that collects load information in real time and recomputes and reassigns the weights. Experiments comparing the load-balancing performance of the different algorithms show that the improved algorithm effectively raises the performance of the server cluster.
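As a reference point for the algorithm discussed above, the sketch below shows the smooth weighted round-robin selection rule used by Nginx's default upstream scheduler; the paper's improvement recomputes these weights from load information gathered at run time. The class, the set_weights hook, and the example weights are illustrative, not taken from the paper.

```python
class SmoothWeightedRR:
    """Smooth weighted round-robin: each pick raises every server's current
    value by its weight, selects the largest, then subtracts the weight total."""

    def __init__(self, weights):
        self.weights = dict(weights)            # server -> weight
        self.current = {s: 0 for s in weights}

    def set_weights(self, weights):
        """Hook for a dynamic variant that recomputes weights from live load data."""
        self.weights = dict(weights)
        self.current = {s: self.current.get(s, 0) for s in weights}

    def pick(self):
        total = sum(self.weights.values())
        for server, weight in self.weights.items():
            self.current[server] += weight
        best = max(self.current, key=self.current.get)
        self.current[best] -= total
        return best


rr = SmoothWeightedRR({"backend1": 5, "backend2": 1, "backend3": 1})
print([rr.pick() for _ in range(7)])   # backend1 appears 5 times out of 7
```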

4.
Real-time communication mainly carries live audio and video and is characterized by low latency and high bandwidth consumption. When the user population is large, a single-server deployment cannot meet the overall demand, and a distributed cluster must be built to provide the service; how to distribute these accesses sensibly across servers and balance the load within the cluster then becomes especially important. This paper first analyzes the real-time communication workflow in the single-server case, then studies and analyzes common load-balancing algorithms. To satisfy the consistency requirement that clients in the same group must be forwarded to the same server, it proposes an adaptive load-balancing algorithm based on consistent hashing and a genetic algorithm, and applies and experimentally validates the algorithm.

5.
A Dynamic Fault-Tolerance Algorithm for Improving the Performance of Redundant Services
钱方  贾焰  黄杰  顾晓波  邹鹏 《软件学报》2001,12(6):928-935
To meet the performance requirements of distributed applications, a load-balancing mechanism is introduced to trade off between the active-replication and primary-backup fault-tolerance algorithms. A dynamic fault-tolerance algorithm based on redundant services, RAWA (read-any-write-any), is proposed. It changes the quorum of a request dynamically according to the system load, which not only speeds up request processing but also achieves load balancing in a simple and effective way. Combined with the proposed consistency-maintenance and mutual-exclusion mechanisms, the algorithm is applicable to nested accesses and stateful services. The performance of RAWA is also analyzed, and comparative tests against other fault-tolerance algorithms on a CORBA platform demonstrate its advantages.
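The abstract only states that RAWA varies the quorum of a request with the system load; it does not say how the quorum is sized. The sketch below is therefore just one illustrative reading: read and write quorum sizes are chosen from the observed read fraction while keeping them overlapping, and each request is sent to the currently least-loaded replicas. The sizing rule, the replica names, and the load values are assumptions, not the paper's algorithm.

```python
def choose_quorums(n, read_fraction):
    """Pick read/write quorum sizes with r + w = n + 1, so any read quorum
    intersects any write quorum. The weighting by read fraction is illustrative."""
    r = max(1, min(n, round((1.0 - read_fraction) * n)))   # cheap reads when reads dominate
    w = n + 1 - r                                          # cheap writes when writes dominate
    return r, w


def pick_quorum(replicas, size, load):
    """Send the request to the `size` least-loaded replicas."""
    return sorted(replicas, key=lambda s: load[s])[:size]


replicas = ["a", "b", "c", "d", "e"]
load = {"a": 0.7, "b": 0.1, "c": 0.4, "d": 0.2, "e": 0.9}
r, w = choose_quorums(len(replicas), read_fraction=0.8)
print(r, w, pick_quorum(replicas, r, load))   # read-one under a read-heavy workload
```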

6.
With the development of Internet technology, the load capacity of Internet server clusters faces unprecedented challenges, and a sound load-balancing strategy is particularly important. To make load balancing as efficient as possible, consistent hashing can be used to distribute load in a cluster load-balancing system. For server clusters built on a microservice architecture, this paper analyzes the cluster's load-balancing characteristics and proposes a design and partitioning method for a consistent-hash ring based on virtual nodes, together with an allocation strategy based on dynamic weights. Building on consistent hashing, it shifts load between services in the cluster, addressing the imbalance that arises as service load grows in a microservice cluster and preventing individual services from crashing under excessive load. Experiments show that, compared with the traditional consistent-hashing algorithm, the probability of load imbalance under the improved strategy is 31% of the original, and the dynamic allocation strategy exhibits good load-balancing performance, effectively solving the load-balancing problem of distributed microservice architectures.
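A minimal sketch of a consistent-hash ring with virtual nodes follows: each service instance receives a number of virtual nodes proportional to its weight, so raising or lowering a weight shifts only part of the key space. The hash function, the virtual-node multiplier, and the service names are assumptions for illustration; the paper's partitioning and dynamic-weight rules may differ.

```python
import bisect
import hashlib


def _hash(key):
    """Map a string key to an integer position on the ring."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)


class WeightedHashRing:
    """Consistent-hash ring whose virtual-node count per server tracks its weight."""

    def __init__(self, weights, vnodes_per_weight=40):
        self.ring = {}    # position -> server
        self.keys = []    # sorted positions
        for server, weight in weights.items():
            for i in range(weight * vnodes_per_weight):
                pos = _hash(f"{server}#{i}")
                self.ring[pos] = server
                bisect.insort(self.keys, pos)

    def lookup(self, request_key):
        """Return the first virtual node clockwise from the request's position."""
        pos = _hash(request_key)
        idx = bisect.bisect(self.keys, pos) % len(self.keys)
        return self.ring[self.keys[idx]]


# Rebuilding the ring with new dynamic weights moves only part of the key space.
ring = WeightedHashRing({"svc-a": 3, "svc-b": 1, "svc-c": 2})
print(ring.lookup("user-42"), ring.lookup("user-43"))
```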

7.
Distributed file systems need to provide for fault tolerance. This is typically achieved with the replication of files. Existing approaches to the construction of replicated file systems sacrifice strong semantics (i.e. the guarantees the systems make to running computations when failures occur and/or files are accessed concurrently). This is done mainly for efficiency reasons. This paper puts forward a replicated file system protocol that enforces strong consistency semantics. Enforcing strong semantics allows distributed systems to behave more like their centralized counterparts, an essential feature for providing the transparency that distributed computing systems strive for. One characteristic of our protocol is its distributed nature. Because of it, the extra cost needed to ensure the stronger consistency is kept low (since the bottleneck problem observed in primary-copy systems is avoided), load balancing is facilitated, clients can choose physically close servers, and the work required during failure handling and recovery is reduced. Another characteristic is that, instead of optimizing each operation type on its own, file system activity is viewed at the level of a file session, so the costs of individual operations can be spread over the life of a file session. We have developed a prototype and compared its performance to both NFS and a nonreplicated version of the prototype that also achieves strong consistency semantics. Through these comparisons, the cost of replication and the cost of enforcing the strong consistency semantics are shown.

8.
Cloud computing has become a promising paradigm as a next-generation computing model, providing computation, software, data access, and storage services without requiring users to know the location of the physical resources, interconnected across the globe, that deliver those services. In such an environment, important issues such as information sharing and resource/service discovery arise. In order to overcome critical limitations of centralized approaches to information sharing and resource/service discovery, this paper proposes the framework of a scalable multi-attribute hybrid overlay featuring decentralized information sharing, flexible resource/service discovery, fault tolerance and load balancing. Additionally, the proposed hybrid overlay integrates a structured P2P system with an unstructured one to support complex queries. Mechanisms such as load balancing and fault tolerance implemented in the proposed system to improve overall system performance are also discussed. Experimental results show that the performance of the proposed approach is feasible and stable, as the hybrid overlay improves system performance by reducing the number of routing hops and balancing the load by migrating requests.

9.
As user load on web servers becomes more globalised, duplicated regional mirroring is increasingly seen as an expensive way to meet regional peak demand. Alternative solutions are being explored in the form of dynamic load balancing, using distributed intelligent middleware to re-route traffic from busy regions to quieter ones as global load patterns change over a 24-hour cycle. The techniques used can also be employed under fault and planned-maintenance conditions. One such solution, providing the load balancing via reconfigurable dynamic proxy servers, is seen as 'unobtrusive' in that it works with standard web browsers and web server technology. The technique employs an evolutionary algorithm to perform combinatorial optimisation against a dynamic performance-predicting model. This paper describes this solution, focussing on issues such as algorithm tuning, scalability and reliability. A prototype system is currently being trialled within the Systems Integration Department at British Telecommunications Research Labs, Adastral Park, and is the subject of several BT-held patents.

10.
Research on Load-Balancing Strategies for LVS Cluster Systems
Load-balancing technology can effectively raise server processing capacity within the current structure of network applications, allowing the existing system to serve more users accessing the service concurrently. This paper analyzes the load-balancing algorithms of Linux Virtual Server (LVS) clusters in detail, identifies their shortcomings, proposes an improved algorithm with dynamic feedback, and builds an LVS/NAT system to validate the improved algorithm.
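The dynamic-feedback idea can be sketched as follows: each real server periodically reports its resource utilisation, the director folds the figures into a single composite load, and the LVS weights are adjusted, skipping changes too small to matter. The composite-load coefficients, the update rule, and the threshold below are illustrative assumptions rather than the paper's exact formula.

```python
def composite_load(cpu, mem, io, coeff=(0.5, 0.3, 0.2)):
    """Fold per-resource utilisations (each in [0, 1]) into one load figure."""
    return coeff[0] * cpu + coeff[1] * mem + coeff[2] * io


def adjust_weights(base_weights, loads, threshold=0.1):
    """One feedback round: shrink the weight of a loaded real server, restore it
    as the load drops, and ignore relative changes below `threshold` to avoid thrashing."""
    new_weights = {}
    for server, base in base_weights.items():
        target = max(1, round(base * (1.0 - loads[server])))
        if abs(target - base) / base >= threshold:
            new_weights[server] = target
        else:
            new_weights[server] = base
    return new_weights


loads = {"rs1": composite_load(0.9, 0.7, 0.8),   # heavily loaded real server
         "rs2": composite_load(0.2, 0.3, 0.1)}   # lightly loaded real server
print(adjust_weights({"rs1": 10, "rs2": 10}, loads))   # -> {'rs1': 2, 'rs2': 8}
```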

11.
Server clusters for Internet services can yield both high performance and cost effectiveness. Contemporary approaches to cluster design must confront the tradeoff between dynamic load-balancing and efficiency when dispatching client requests to servers in the cluster. In this paper we describe a packet filtering-based RR scheduling scheme, intended to be an easily implemented scheme for hosting Internet services on a server cluster in a way transparent to clients. We take a non-centralized approach, in which client packets sent to the cluster's single IP address are broadcast to all of its servers. Packets requesting that a new TCP session be set up cause counters in each server to be incremented; if a server's counter matches its fixed unique ID, it takes charge of the session, else it ignores the packet. This yields a round-robin type algorithm. We describe this approach in detail, and present results of simulations that show it achieves higher performance (in terms of throughput and reliability) than similar approaches based on client-IP hashing and dispatcher-based RR.
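The dispatching rule can be illustrated with a small simulation: every server sees every broadcast SYN, keeps its own counter, and accepts the session only when the counter maps to its own ID. Reading "counter matches the fixed unique ID" as a comparison modulo the cluster size is an assumption made for this sketch.

```python
class Server:
    """One node in the cluster; every node sees every broadcast packet."""

    def __init__(self, server_id, num_servers):
        self.server_id = server_id
        self.num_servers = num_servers
        self.syn_counter = 0
        self.accepted = []

    def on_syn(self, packet):
        # Every server counts every new-session (SYN) packet it observes.
        self.syn_counter += 1
        # Only the server whose ID matches the counter (mod cluster size)
        # takes charge of the session; the others silently drop the packet.
        if self.syn_counter % self.num_servers == self.server_id:
            self.accepted.append(packet)


servers = [Server(i, 3) for i in range(3)]
for session in range(9):                     # nine incoming TCP sessions
    for s in servers:                        # the broadcast reaches everyone
        s.on_syn(f"session-{session}")
print([len(s.accepted) for s in servers])    # -> [3, 3, 3], i.e. round-robin
```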

12.
Load-balancing technology can effectively increase server data-processing capacity within the current structure of network applications, allowing a system in the existing client-server model to serve more users accessing the service concurrently. This paper discusses the main load-balancing techniques in use today; since structural limitations make these techniques unsuitable for clusters of gateways and firewalls, it proposes a general-purpose, highly available load-balancing technique.

13.
Modern distributed systems consisting of powerful workstations and high-speed interconnection networks are an economical alternative to special-purpose supercomputers. The technical issues that need to be addressed in exploiting the parallelism inherent in a distributed system include heterogeneity, high-latency communication, fault tolerance and dynamic load balancing. Current software systems for parallel programming provide little or no automatic support towards these issues and require users to be experts in fault-tolerant distributed computing. The Paralex system is aimed at exploring the extent to which the parallel application programmer can be liberated from the complexities of distributed systems. Paralex is a complete programming environment and makes extensive use of graphics to define, edit, execute, and debug parallel scientific applications. All of the necessary code for distributing the computation across a network and replicating it to achieve fault tolerance and dynamic load balancing is automatically generated by the system. In this paper we give an overview of Paralex and present our experiences with a prototype implementation.

14.
A Load-Balancing Algorithm Based on Network Address Translation
This paper examines the load-balancing techniques and load-distribution strategies used by cluster servers, and applies network address translation to a VOD cluster, spreading the load across multiple servers to relieve the high CPU or I/O load caused by the large number of concurrent accesses a VOD cluster faces. For the best balancing effect, the load balancer must distribute load according to each server's current CPU and I/O state, which requires monitoring server load dynamically and applying an optimized load-distribution strategy so that the load is spread evenly.

15.
Research and Implementation of a Content-Based Request Distribution System for Web Server Clusters
With the rapid development of Web application services on the Internet, users urgently demand Web services with high performance, high reliability, and high scalability. Based on an analysis of existing request-distribution strategies, this paper presents the design and implementation of a content-based load-distribution strategy for cluster servers that takes the cache locality of the backend servers into account.

16.
To improve response time of a Web site, one replicates the site on multiple servers. The effectiveness of a replicated server system will depend on how the incoming requests are distributed among replicas. A large number of load-balancing strategies for Web server systems have been proposed. In this paper we describe a testbed that can be used to evaluate the performance of different load-balancing strategies. The testbed uses a general architecture which allows different load-balancing approaches to be supported easily. It emulates a typical World Wide Web scenario and allows variable load generation and performance measurement. We have performed some preliminary experiments to measure the performance of a few policies for load balancing using this testbed.

17.
In a balanced cluster, how requests are distributed and how services are selected are key to server cluster performance. Based on a stochastic high-level Petri net (SHLPN) model, this paper proposes a combined balanced scheduling scheme consisting of a dynamic-feedback request load-distribution algorithm and weighted queue selection. Requests are distributed dynamically according to the real-time load of each server entity in the cluster and served in combination with request weights, improving the system's load-balancing capability.

18.
LinuxDirector: A connection director for scalable internet services
LinuxDirector is a connection director that supports load balancing among multiple Internet servers and can be used to build scalable Internet services based on clusters of servers. LinuxDirector extends the TCP/IP stack of the Linux kernel to support three IP load-balancing techniques: VS/NAT, VS/TUN and VS/DR. Four scheduling algorithms have been implemented to assign connections to different servers. Scalability is achieved by transparently adding or removing a node in the cluster. High availability is provided by detecting node or daemon failure and reconfiguring the system appropriately. This paper describes the design and implementation of LinuxDirector and presents several of its features, including scalability, high availability and connection affinity.
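One scheduler of the kind mentioned above, weighted least-connections, picks the real server with the smallest ratio of active connections to weight; a minimal sketch follows. The pool contents are illustrative, and the sketch does not claim to reproduce the kernel implementation.

```python
def weighted_least_connection(servers):
    """Pick the real server with the smallest active-connections-to-weight ratio.

    `servers` maps name -> (active_connections, weight); weights must be > 0.
    """
    return min(servers, key=lambda s: servers[s][0] / servers[s][1])


pool = {"rs1": (120, 4), "rs2": (40, 1), "rs3": (60, 2)}
# ratios: rs1 = 30, rs2 = 40, rs3 = 30; the tie is broken by iteration order
print(weighted_least_connection(pool))   # -> rs1
```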

19.
Distributed Denial of Service (DDoS) attacks generate flooding traffic from multiple sources towards selected nodes. Diluted low-rate attacks lead to graceful degradation, while concentrated high-rate attacks leave the network functionally unstable. Previous approaches have reached the level where survivable systems attempt to mitigate the effects of such attacks. However, even with such reactive mitigation in place, a network under DDoS attack becomes unstable, and legitimate users suffer increased response times and frequent network failures. Moreover, the Internet is dynamic in nature, and the topic of automated responses to attacks has not received much attention. In this paper, we propose a proactive approach to DDoS in the form of an integrated auto-responsive framework that aims to keep attack flows from reaching the target and to maintain stable network functionality even while the network is under attack. It combines detection and characterization with attack isolation and mitigation to recover networks from DDoS attacks. As the first line of defense, our method uses high-level specifications of the entropy variations of legitimate interactions between clients and servers; the network generates optimized entropic detectors that monitor flow behavior to identify significant deviations. As the second line of defense, malicious flows are identified and directed to an isolated zone of honeypots where they cannot cause further damage, while legitimate flows are directed to a randomly selected server from a pool of replicated servers. This leads attackers to believe they are succeeding, whereas in reality they are simply wasting time and resources. Service replication and attack isolation alone are not sufficient to mitigate the attacks, and limited network resources must be used judiciously while an attack is underway. As the third line of defense, we therefore propose a Dynamic Honeypot Engine (DHE), modeled as part of the Honeypot Controller (HC) module, which triggers the automatic generation of enough nodes to service client requests and of the required number of honeypots that interact with attackers in a contained manner. This load balancing makes the network attack tolerant. Legitimate clients, depending on trust levels built from their monitored statistics, can reach the actual servers for a certain time period. Attack flows reaching honeypots are logged by the Honeypot Data Repository (HDR). The most severe flows are punished by starting honeypot back-propagation sessions and filtering them at the source, as the last line of defense. The data collected on the honeypots are used to isolate and filter the current attack, if any, and to give insight into future attack trends. The judicious mixture and self-organization of servers and honeypots over different time intervals also guarantees the promised QoS. We present the exhaustive parametric dependencies at the various phases of an attack and their regulation in real time to make the service network tolerant of DDoS attacks and insensitive to attack load. Results show that this auto-responsive network has the potential to maintain stable network functionality and guaranteed QoS even under attack, and it can be fine-tuned to dynamically changing network conditions. We validate the effectiveness of the approach with analytical modeling on an Internet-type topology and simulation in ns-2 on a Linux platform.
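The entropy-based first line of defense can be sketched as follows: compute the Shannon entropy of the source-address distribution in each observation window and flag windows whose entropy deviates from a learned baseline. The window format, baseline, and tolerance below are illustrative assumptions, not the paper's detector specification.

```python
import math
from collections import Counter


def source_entropy(packets):
    """Shannon entropy of the source-address distribution in one observation window."""
    counts = Counter(src for src, _dst in packets)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())


def is_suspicious(window, baseline, tolerance=0.5):
    """Flag the window if its entropy deviates from the learned baseline."""
    return abs(source_entropy(window) - baseline) > tolerance


# Twenty distinct legitimate sources versus a flood dominated by one source.
normal = [(f"10.0.0.{i}", "server") for i in range(1, 21)]
flood = [("10.0.0.99", "server")] * 200 + normal
print(source_entropy(normal), source_entropy(flood))
print(is_suspicious(flood, baseline=source_entropy(normal)))   # -> True
```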

20.
Cloud computing is an innovative computing paradigm designed to provide a flexible and low-cost way to deliver information technology services on demand over the Internet. Proper scheduling and load balancing of resources are required for efficient operation in a distributed cloud environment. Since cloud computing is growing rapidly and customers are demanding better performance and more services, scheduling and load balancing of cloud resources have become a very interesting and important area of research. As more and more consumers assign their tasks to the cloud, service-level agreements (SLAs) between consumers and providers are emerging as an important aspect. The proposed prediction model is based on past usage patterns and aims to provide optimal resource management without violating the agreed service-level conditions in cloud data centers. It considers the SLA in both the initial scheduling stage and the load-balancing stage, and it pursues several objectives: the minimum makespan, the minimum degree of imbalance, and the minimum number of SLA violations. The experimental results show the effectiveness of the proposed system compared with other state-of-the-art algorithms.
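Two of the objectives named above, makespan and degree of imbalance, can be computed as in the sketch below. The degree-of-imbalance formula shown, (Tmax - Tmin) / Tavg over the per-VM completion times, is a common formulation in the cloud-scheduling literature, and the task lengths and VM speeds are illustrative data, not results from the paper.

```python
def makespan(assignment, exec_time):
    """Completion time of the busiest VM; `assignment` maps vm -> list of task lengths,
    and `exec_time(length, vm)` converts a task length on a given VM into seconds."""
    return max(sum(exec_time(t, vm) for t in tasks) for vm, tasks in assignment.items())


def degree_of_imbalance(assignment, exec_time):
    """(Tmax - Tmin) / Tavg over the per-VM completion times."""
    times = [sum(exec_time(t, vm) for t in tasks) for vm, tasks in assignment.items()]
    t_avg = sum(times) / len(times)
    return (max(times) - min(times)) / t_avg


# Illustrative data: task length in MI, VM speed in MIPS.
speeds = {"vm1": 500, "vm2": 1000}
exec_time = lambda length, vm: length / speeds[vm]
plan = {"vm1": [1000, 2000], "vm2": [4000, 1000]}
print(makespan(plan, exec_time), degree_of_imbalance(plan, exec_time))   # 6.0 and ~0.18
```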
