Similar Documents
20 similar documents retrieved (search time: 390 ms)
1.
The Data-Oriented Architecture (DOA) offers a new and effective solution for the circulation and sharing of massive heterogeneous data. Since the Data Registry Center (DRC) is the core component of DOA, its access performance is particularly critical. To address DRC cluster overload caused by highly concurrent access, Nginx reverse-proxy load balancing is adopted. The Nginx load-balancing strategies are analyzed and optimized, and a dynamic load-balancing strategy composed of dynamic configuration, load collection, and algorithmic scheduling is proposed. In the load-scheduling module, the Nginx weighted least-connections (WLC) scheduling algorithm is improved: adaptive weights continuously select the best-performing node in each period to handle requests. High-concurrency performance tests verify that the proposed strategy handles heavy traffic in a DRC cluster more effectively, improving cluster resource utilization and shortening request response time.
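The adaptive weighted least-connections idea above can be sketched as follows; the node names, the 0-to-1 load samples, and the weight-update formula are illustrative assumptions rather than the paper's exact design.

```python
class Node:
    def __init__(self, name, weight=1):
        self.name = name
        self.weight = weight      # adaptive weight, refreshed each period
        self.active_conns = 0     # active connections on this node

def update_weights(nodes, loads):
    """Refresh weights from collected load samples in [0, 1]; lightly
    loaded nodes get larger weights for the next scheduling period."""
    for node in nodes:
        node.weight = max(1, round(10 * (1.0 - loads[node.name])))

def pick_node(nodes):
    """WLC rule: pick the node minimizing active_conns / weight."""
    return min(nodes, key=lambda n: n.active_conns / n.weight)

nodes = [Node("drc-1"), Node("drc-2")]
nodes[0].active_conns, nodes[1].active_conns = 3, 5
update_weights(nodes, {"drc-1": 0.9, "drc-2": 0.2})  # drc-2 is less loaded
chosen = pick_node(nodes)  # 3/1 = 3.0 vs 5/8 = 0.625
```

Although drc-2 holds more connections, its higher adaptive weight makes it the scheduling choice, which is the intended effect of weighting by observed load.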

2.
In Web cache clusters, frequent bursts of Web requests cause resource shortages and a marked drop in system performance. To handle such bursts effectively, an elastic Web cache cluster that uses both local and cloud resources is constructed. To improve performance and reduce cost in the elastic cluster, an adaptive load model is proposed. The model adjusts itself dynamically and applies effectively to heterogeneous Web cache clusters. Considering the network latency of cloud nodes, the model is further revised into a cloud-node load model. Based on these load models, an adaptive load-balancing strategy for the elastic Web cache cluster is constructed. Compared with other load-balancing strategies, the adaptive strategy achieves efficient load balancing in elastic Web cache clusters.

3.
As the user base of e-commerce websites keeps growing, high concurrency has become a major challenge in building large-scale e-commerce systems; balancing load across the nodes of a Web service cluster with a load-balancing algorithm is one way to address it. However, current general-purpose load-balancing algorithms have shortcomings. To address this, a Dynamic Adaptive Weight Round-Robin Random Load-Balancing (DAWRRRLB) algorithm is proposed. The algorithm accounts for the multiple factors that affect server-node performance in a Web service cluster, dynamically adjusts each node's load capacity according to its real-time load at runtime, and optimizes weighted round-robin load balancing with an improved Pick-K algorithm, so that the best-performing server node is always providing service. Repeated comparative experiments show that the improved DAWRRRLB algorithm effectively improves load-balancing efficiency.
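A minimal sketch of dynamic-weight round-robin dispatch in the spirit of DAWRRRLB; the composite load score, its factor weights, and the server names are assumptions for demonstration, and the Pick-K refinement is omitted.

```python
def load_score(cpu, mem, conns, max_conns=1000):
    """Combine normalized CPU, memory, and connection load into one score
    (the 0.5/0.3/0.2 factor weights are illustrative assumptions)."""
    return 0.5 * cpu + 0.3 * mem + 0.2 * (conns / max_conns)

def smooth_wrr(weights, rounds):
    """Smooth weighted round-robin over a {server: weight} map."""
    current = {s: 0 for s in weights}
    total = sum(weights.values())
    order = []
    for _ in range(rounds):
        for s in weights:
            current[s] += weights[s]
        best = max(current, key=current.get)
        current[best] -= total
        order.append(best)
    return order

# Derive a higher dispatch weight for the less-loaded server.
weights = {"web-1": 5 - round(4 * load_score(0.9, 0.8, 800)),
           "web-2": 5 - round(4 * load_score(0.2, 0.1, 100))}
schedule = smooth_wrr(weights, 6)
```

With these samples web-1 gets weight 2 and web-2 gets weight 4, so web-2 receives two thirds of the dispatches while requests to web-1 stay spread out rather than bunched together.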

4.
陈刚  郭玉东  魏小锋 《计算机应用》2017,37(12):3442-3446
Web servers are widely deployed on cloud platforms typified by Docker containers and face severe security challenges. To improve the defensive capability of such Web servers, a dynamic defense method based on Linux namespaces is proposed. While keeping the Web service working normally, the method first builds Web server runtime environments with namespaces; second, it alternates execution among multiple environments to realize dynamic transformation of the Web server, confusing intruders and raising the difficulty of attacking it; finally, it periodically and proactively destroys and rebuilds the Web server's runtime environment to eliminate the effects of intrusions, thereby effectively improving the server's dynamic defense capability. Experimental results show that the method effectively enhances Web server security with little impact on system performance: the added latency for a 100 KB response is 0.02–0.07 ms.

5.
Given that current general-purpose Web servers implement no effective QoS control, and that existing Web QoS control lacks flexibility, generality, and extensibility, a dynamic Web QoS control model based on request-target classification is proposed. It adopts a dynamic control strategy centered on response time, applying target-classified dynamic admission control and dynamic reconfiguration to HTTP requests. Experimental results show that the method significantly reduces system response time and keeps throughput stable under high load.

6.
With the increasingly wide use of Web applications on the Internet, Web servers must provide performance guarantees and differentiated services under high load to meet diverse user needs. Response delay is a key performance metric for Web servers, and proportional delay differentiation is an important differentiated-service model. For the Apache Web server, proportional delay differentiation based on adaptive control is proposed and implemented. In each sampling period, the adaptive controller dynamically computes and adjusts the number of service threads for each client class according to preset delay-differentiation parameters, guaranteeing that high-priority clients experience lower average connection delay while the average delay ratios between client classes remain constant. Simulation results show that under dynamically changing workloads, reference inputs, and different system configurations, the controlled Apache Web server reliably provides proportional delay differentiation.
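The per-period thread reallocation can be sketched as a simple feedback loop; the toy delay model (delay = queue length / threads), the class names, and the one-thread-per-period step size are illustrative assumptions, not the paper's controller design.

```python
def measured_delay(queue_len, threads):
    """Toy model: average delay grows with backlog, shrinks with threads."""
    return queue_len / threads

def rebalance(threads, queues, target_ratio, periods=50):
    """Shift one thread per sampling period between classes until the
    delay ratio d_low / d_high oscillates around target_ratio."""
    for _ in range(periods):
        d_high = measured_delay(queues["high"], threads["high"])
        d_low = measured_delay(queues["low"], threads["low"])
        if d_low / d_high > target_ratio and threads["high"] > 1:
            threads["high"] -= 1   # low class suffers too much: help it
            threads["low"] += 1
        elif d_low / d_high < target_ratio and threads["low"] > 1:
            threads["low"] -= 1    # high class not favored enough
            threads["high"] += 1
    return threads

# 20 threads total, backlog of 40 high-priority and 60 low-priority requests,
# target: low-priority delay 3x the high-priority delay.
threads = rebalance({"high": 10, "low": 10}, {"high": 40, "low": 60}, 3.0)
```

Starting from an even split, the loop settles into the 13–14 high-priority threads that the 3:1 target implies for this backlog, illustrating how thread counts rather than delays are the actuated variable.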

7.
杨雷  代钰  张斌  王昊 《计算机科学》2015,42(1):47-49
Performance analysis of multi-tier Web applications is one of the key factors in dynamic resource allocation and management and in guaranteeing multi-tier application performance. Traditional performance models often assume servers are deployed in an environment free of performance interference and ignore the influence of logical resources' service capacity on application performance. With the development of cloud computing, underlying physical resources can be virtualized and offered as services, which effectively supports performance guarantees for multi-tier Web applications. How to account for performance interference among virtual machines and for logical resources' service capacity has therefore become a key problem in analyzing multi-tier Web application performance in the cloud. To this end, a queueing-network-based performance model for multi-tier Web applications is built; it extends current models with a drop queue to capture concurrency limits, and a parameter-estimation method is proposed that accounts for inter-VM performance interference. Experimental results verify the effectiveness of the proposed model.
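One standard way to evaluate such a closed multi-tier queueing model is exact Mean Value Analysis (MVA); the per-tier service demands and think time below are illustrative assumptions, and the paper's drop-queue extension and interference terms are not modeled.

```python
def mva(service_demands, num_clients, think_time=1.0):
    """Exact MVA for a closed queueing network: iterate the client
    population up from 1 and return mean response time at num_clients."""
    n_queue = [0.0] * len(service_demands)   # mean queue length per tier
    response = 0.0
    for n in range(1, num_clients + 1):
        # Residence time per tier: service demand inflated by queueing.
        residence = [d * (1 + q) for d, q in zip(service_demands, n_queue)]
        response = sum(residence)
        throughput = n / (think_time + response)  # interactive response law
        n_queue = [throughput * r for r in residence]
    return response

# Three tiers: web, application, and database service demands (seconds).
r = mva([0.005, 0.010, 0.020], num_clients=50)
```

Because queue lengths feed back into residence times, the predicted response time exceeds the bare 35 ms total service demand once clients contend for the database tier, which is the behavior such models are built to expose.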

8.
As Web applications become ever more widespread, Web servers carry increasingly heavy workloads; in large-volume computations especially, server hardware becomes the bottleneck of computational efficiency. To solve this problem, combining the technical characteristics of cloud computing and middleware, a multi-agent-based cloud service middleware architecture is proposed. The framework has three layers, a user interface layer, an SOA layer, and a resource management layer, and uses multi-agent technology internally. Nodes communicate via ACL messages; load balancing combines static computing capacity with dynamic load; and built-in logging and fault-tolerance services keep the system running stably. Applying this framework to real Web applications can greatly improve Web server computational efficiency.

9.
张勇  黄涛  陈宁江  金蓓弘 《软件学报》2007,18(7):1660-1671
Dynamic multi-tier Web systems are affected by many sources of uncertainty at runtime and exhibit different performance characteristics under different workload patterns, requiring different performance models. To eliminate the impact of uncertainty on system performance, assurance mechanisms based on feedback control mainly adopt a single, fixed performance model and pay insufficient attention to the changing performance characteristics of dynamic Web systems. In an Internet environment where workloads fluctuate unpredictably, this reduces the accuracy and stability of performance targets. Adopting the idea of adaptive control, with meeting the average response time of requests as the goal, an assurance mechanism based on online estimation of the system performance model is proposed and evaluated on two different transactional Web benchmarks. Results show that the method effectively reduces the deviation of response time from the target under changing workload patterns.

10.
Increasing request load often degrades Web server performance, so that the quality of service users expect cannot be guaranteed; this is a problem that service-level Web systems face and must solve. The paper proposes a load-distribution strategy for Web server clusters: by classifying user requests, isolating the response performance of different request classes, serving high-level requests preferentially, and applying admission control, it provides different quality of service to different classes of Web requests, guaranteeing service quality for service-level users. A latest-possible allocation principle is also adopted to improve the system's load-balancing capability and shorten average response time. Simulation experiments verify the correctness and effectiveness of the strategy.
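Class-based differentiation with admission control can be sketched as follows; the class names ("premium", "basic") and the per-class backlog caps are hypothetical, and the latest-allocation rule is not modeled.

```python
from collections import deque

class DifferentiatedQueue:
    def __init__(self, caps):
        self.queues = {cls: deque() for cls in caps}
        self.caps = caps

    def admit(self, cls, request):
        """Admission control: reject when the class backlog is at its cap."""
        if len(self.queues[cls]) >= self.caps[cls]:
            return False
        self.queues[cls].append(request)
        return True

    def next_request(self):
        """Serve premium requests before basic ones."""
        for cls in ("premium", "basic"):
            if self.queues[cls]:
                return self.queues[cls].popleft()
        return None

q = DifferentiatedQueue({"premium": 100, "basic": 2})
q.admit("basic", "b1")
q.admit("basic", "b2")
rejected = not q.admit("basic", "b3")   # basic backlog is full
q.admit("premium", "p1")
first = q.next_request()                 # premium jumps the line
```

Rejecting excess low-class requests at admission keeps the high-class queue short even under overload, which is what isolates the response performance of the two classes.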

11.
As green computing is becoming a popular computing paradigm, the performance of energy-efficient data center becomes increasingly important. This paper proposes power-aware performance management via stochastic control method (PAPMSC), a novel stochastic control approach for virtualized web servers. It addresses the instability and inefficiency issues due to dynamic web workloads. It features a coordinated control architecture that optimizes the resource allocation and minimizes the overall power consumption while guaranteeing the service level agreements (SLAs). More specifically, due to the interference effect among the co-located virtualized web servers and time-varying workloads, the relationship between the hardware resource assignment to different virtual servers and the web applications’ performance is considered as a coupled Multi-Input-Multi-Output (MIMO) system and formulated as a robust optimization problem. We propose a constrained stochastic linear-quadratic controller (cSLQC) to solve the problem by minimizing the quadratic cost function subject to constraints on resource allocation and applications’ performance. Furthermore, a proportional controller is integrated to enhance system stability. In the second layer, we dynamically manipulate the physical frequency for power efficiency using an adaptive linear quadratic regulator (ALQR). Experiments on our testbed server with a variety of workload patterns demonstrate that the proposed control solution significantly outperforms existing solutions in terms of effectiveness and robustness.

12.
Designing eco-friendly systems has been at the forefront of computing research. Faced with a growing concern about server energy expenditure and climate change, both industry and academia have started to show high interest in computing systems powered by renewable energy sources. Existing proposals on this issue mainly focus on optimizing resource utilization or workload performance. The key supporting hardware structures for cross-layer power management and emergency handling mechanisms are often left unexplored. This paper presents GreenPod, a research framework for exploring scalable and dependable renewable power management in datacenters. An important feature of GreenPod is that it enables joint management of server power supplies and virtualized server workloads. Its interactive communication portal between servers and power supplies allows datacenter operators to perform real-time renewable energy driven load migration and power emergency handling. Based on our system prototype, we discuss an important topic: virtual machine (VM) workload survival in the face of an extended utility outage and an insufficient onsite renewable power budget. We show that whether a VM can survive depends on the operating frequencies and workload characteristics. The proposed framework can greatly encourage and facilitate innovative research in dependable green computing.

13.
Multiple Internet applications are often hosted in one datacenter, sharing underlying virtualized server resources. It is important to provide differentiated treatment to co-hosted applications and to improve overall system performance by efficient use of shared resources. Challenges arise due to multi-tier service architecture, virtualized server infrastructure, and highly dynamic and bursty workloads. We propose a coordinated admission control and adaptive resource provisioning approach for multi-tier service differentiation and performance improvement in a shared virtualized platform. We develop new model-independent reinforcement learning based techniques for virtual machine (VM) auto-configuration and session based admission control. Adaptive VM auto-configuration provides proportional service differentiation between co-located applications and improves application response time simultaneously. Admission control improves session throughput of the applications and minimizes resource wastage due to aborted sessions. A shared reward actualizes coordination between the two learning modules. For system agility and scalability, we integrate the reinforcement learning approach with cascade neural networks. We have implemented the integrated approach in a virtualized blade server system hosting RUBiS benchmark applications. Experimental results demonstrate that the new approach meets differentiation targets accurately and achieves performance improvement of applications at the same time. It reacts to dynamic and bursty workloads in an agile and scalable manner.

14.
Current cloud providers allocate their virtual machine (VM) instances via pricing algorithms or auction-like algorithms. However, most of these algorithms require static VM provisioning and cannot accurately predict user demand, leaving resources underutilized. A combinatorial-auction-based dynamic VM provisioning and allocation algorithm is therefore proposed that takes users' VM demand into account when making provisioning decisions. The algorithm treats the available computing resources as "fluid" resources that can be divided into VM instances of different quantities and types according to user requests; allocation decisions then follow users' valuations until all resources are allocated. Simulation experiments on real workload data from the Parallel Workload Archive show that the method yields higher revenue for cloud providers and improves resource utilization.
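A greedy valuation-driven allocation over a "fluid" core pool can be sketched as follows; the bid format (user, cores requested, valuation) and the value-per-core ranking are assumptions for illustration, not the paper's auction mechanism.

```python
def allocate(capacity_cores, bids):
    """Rank bids by valuation per core and grant whole requests greedily
    while the shared core pool lasts; unmet requests are skipped."""
    winners = []
    ranked = sorted(bids, key=lambda b: b[2] / b[1], reverse=True)
    for user, cores, value in ranked:
        if cores <= capacity_cores:
            winners.append(user)
            capacity_cores -= cores   # carve VMs out of the fluid pool
    return winners

# (user, cores requested, valuation): per-core values are 10, 7, and 5.
bids = [("u1", 4, 40.0), ("u2", 8, 56.0), ("u3", 4, 20.0)]
winners = allocate(8, bids)
```

Note that u2's whole-request bid loses despite its high total valuation: after u1 takes 4 cores only 4 remain, so the lower-valued u3 fits instead. Pricing and truthfulness concerns, central to a real combinatorial auction, are out of scope for this sketch.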

15.
With cloud and utility computing models gaining significant momentum, data centers are increasingly employing virtualization and consolidation as a means to support a large number of disparate applications running simultaneously on a chip-multiprocessor (CMP) server. In such environments, contention for shared platform resources (CPU cores, shared cache space, shared memory bandwidth, etc.) can have a significant effect on each virtual machine’s performance. In this paper, we investigate the shared resource contention problem for virtual machines by: (a) measuring the effects of shared platform resources on virtual machine performance, (b) proposing a model for estimating shared resource contention effects, and (c) proposing a transition from a virtual machine (VM) to a virtual platform architecture (VPA) that enables transparent shared resource management through architectural mechanisms for monitoring and enforcement. Our measurement and modeling experiments are based on a consolidation benchmark (vConsolidate) running on a state-of-the-art CMP server. Our virtual platform architecture experiments are based on detailed simulations of consolidation scenarios. Through detailed measurements and simulations, we show that shared resource contention affects virtual machine performance significantly and emphasize that a virtual platform architecture is a must for future virtualized datacenters.

16.
The growth in computer and networking technologies over the past decades established cloud computing as a new paradigm in information technology. The cloud computing promises to deliver cost‐effective services by running workloads in a large scale data center consisting of thousands of virtualized servers. The main challenge with a cloud platform is its unpredictable performance. A possible solution to this challenge could be load balancing mechanism that aims to distribute the workload across the servers of the data center effectively. In this paper, we present a distributed and scalable load balancing mechanism for cloud computing using game theory. The mechanism is self‐organized and depends only on the local information for the load balancing. We proved that our mechanism converges and its inefficiency is bounded. Simulation results show that the generated placement of workload on servers provides an efficient, scalable, and reliable load balancing scheme for the cloud data center. Copyright © 2016 John Wiley & Sons, Ltd.
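Best-response load balancing with only local information can be sketched with the classic two-choices sampling idea: each unit task repeatedly compares its server against one randomly sampled alternative and migrates only if that strictly improves its own load. The sampling scheme, unit-size tasks, and round count are illustrative assumptions, not the paper's game formulation.

```python
import random

def balance(num_servers, num_tasks, rounds=200, seed=7):
    """Each task best-responds using only two sampled server loads;
    with unit tasks the stable points are near-even placements."""
    rng = random.Random(seed)
    placement = [rng.randrange(num_servers) for _ in range(num_tasks)]
    load = [placement.count(s) for s in range(num_servers)]
    for _ in range(rounds):
        for i in range(num_tasks):
            a, b = rng.randrange(num_servers), rng.randrange(num_servers)
            target = a if load[a] <= load[b] else b
            # Move only if the task's own experienced load strictly drops.
            if load[target] + 1 < load[placement[i]]:
                load[placement[i]] -= 1
                load[target] += 1
                placement[i] = target
    return load

load = balance(4, 20)
```

Because a migration happens only when it strictly benefits the migrating task, the dynamics are a better-response sequence of a congestion game; any placement where server loads differ by at most one is an equilibrium, which the sampled run reaches.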

17.
Cloud computing is emerging as an increasingly important service-oriented computing paradigm. Management is a key to providing accurate service availability and performance data, as well as enabling real-time provisioning that automatically provides the capacity needed to meet service demands. In this paper, we present a unified reinforcement learning approach, namely URL, to automate the configuration processes of virtualized machines and appliances running in the virtual machines. The approach lends itself to the application of real-time autoconfiguration of clouds. It also makes it possible to adapt the VM resource budget and appliance parameter settings to the cloud dynamics and the changing workload to provide service quality assurance. In particular, the approach has the flexibility to make a good trade-off between system-wide utilization objectives and appliance-specific SLA optimization goals. Experimental results on Xen VMs with various workloads demonstrate the effectiveness of the approach. It can drive the system into an optimal or near-optimal configuration setting in a few trial-and-error iterations.
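A toy version of such trial-and-error auto-configuration is tabular Q-learning over a discrete set of VM sizes; the reward function, the action set, and the learning constants below are hypothetical, not the URL approach itself.

```python
import random

def q_learn_config(reward, configs, episodes=500, eps=0.2, alpha=0.5, seed=3):
    """Tabular one-step Q-learning over a discrete set of VM configurations:
    epsilon-greedy trials update a value estimate per configuration."""
    rng = random.Random(seed)
    q = {c: 0.0 for c in configs}
    for _ in range(episodes):
        if rng.random() < eps:              # explore a random configuration
            c = rng.choice(configs)
        else:                               # exploit the current best estimate
            c = max(q, key=q.get)
        q[c] += alpha * (reward(c) - q[c])  # one-step value update
    return max(q, key=q.get)

# Hypothetical reward: an SLA score that peaks at 4 vCPUs.
best = q_learn_config(lambda c: -abs(c - 4), configs=[1, 2, 4, 8])
```

After a few hundred trial-and-error episodes the value table singles out the 4-vCPU setting, mirroring the abstract's claim of converging to a near-optimal configuration without a system model.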

18.
Yi Wei  M. Brian Blake 《Computing》2016,98(5):523-538
A Cloud platform offers on-demand provisioning of virtualized resources and pay-per-use charge model to its hosted services to satisfy their fluctuating resource needs. Resource scaling in cloud is often carried out by specifying static rules or thresholds. As business processes and scientific jobs become more intricate and involve more components, traditional reactive or rule-based resource management methods are not able to meet the new requirements. In this paper, we extend our previous work on dynamically managing virtualized resources for service workflows in a cloud environment. Extensive experimental results of an adaptive resource management algorithm are reported. The algorithm makes resource management decisions based on predictive results and high level user specified thresholds. It is also able to coordinate resources among the component services of a workflow so that unnecessary resource allocations and terminations can be avoided. Based on observations from previous experiments, the algorithm is extended with a new resource merge strategy in order to prevent average resource size from shrinking. Simulation results from synthetic workload data demonstrated the effectiveness of the extension.

19.
Cloud computing and virtualization technology have revolutionized general-purpose computing applications in the past decade. The cloud paradigm offers advantages through reduction of operation costs, server consolidation, flexible system configuration and elastic resource provisioning. However, despite the success of cloud computing for general-purpose computing, existing cloud computing and virtualization technology face tremendous challenges in supporting emerging soft real-time applications such as online video streaming, cloud-based gaming, and telecommunication management. These applications demand real-time performance in open, shared and virtualized computing environments. This paper identifies the technical challenges in supporting real-time applications in the cloud, surveys recent advancement in real-time virtualization and cloud computing technology, and offers research directions to enable cloud-based real-time applications in the future.

20.
Hypervisors enable the cloud computing model to provide scalable infrastructures and on-demand access to computing resources, as they support multiple operating systems running concurrently on one physical server. This mechanism enhances utilization of the physical server and thus reduces server count in the data center. Hypervisors also drive the benefits of reduced IT infrastructure setup and maintenance costs along with power savings. It is therefore interesting to compare hypervisors' performance under consolidated application workloads. Three hypervisors, ESXi, XenServer, and KVM, are chosen to represent the full-virtualization, para-virtualization, and hybrid-virtualization categories, respectively. We created a private cloud using CloudStack, with the hypervisors deployed as hosts in their respective clusters. Each hypervisor runs three virtual machines, hosting a web server, an application server, and a database server. Experiments are designed using the Design of Experiments (DOE) methodology. With the virtual machines running concurrently, each hypervisor is stressed with consolidated real-time workloads (web load, application load, and OLTP), and system metrics are gathered using the SIGAR framework. The paper proposes a new scoring formula for hypervisor performance in the private cloud under consolidated application workloads, and the accuracy of the results is complemented with sound statistical analysis using DOE.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号