Similar Documents
20 similar documents retrieved (search time: 15 ms)
1.
As large data centers that host multiple Web applications emerge, it is critical to isolate the different application environments for security reasons and to provision shared resources effectively and efficiently to meet different service quality targets at minimum operational cost. To address this problem, we developed a novel resource management framework for multi-tier applications based on virtualization mechanisms. Key techniques presented in this paper include (1) an analytic performance model that employs probabilistic analysis and overload management to deal with non-equilibrium states; (2) a general formulation of the resource management problem that can be solved with both deterministic and stochastic optimization algorithms; (3) deployment of virtual servers to partition resources at a much finer level; and (4) an investigation of the impact of the failure rate to examine the effect of application isolation. Simulation experiments comparing three resource allocation schemes demonstrate the advantage of our dynamic approach in providing differentiated service qualities, preserving QoS levels in failure scenarios, and improving overall performance while reducing resource usage cost.

2.
Web-facing applications are expected to provide certain performance guarantees despite dynamic and continuous workload changes. As a result, application owners are turning to cloud computing, which offers the ability to dynamically provision computing resources (e.g., memory, CPU) in response to changes in workload demands to meet performance targets, and which eliminates upfront costs. Horizontal scaling, vertical scaling, and their combination are the possible dimensions along which a cloud application can be scaled in terms of allocated resources. In vertical elasticity, the focus of this work, the size of virtual machines (VMs) is adjusted in terms of allocated computing resources according to the runtime workload. A commonly used vertical elasticity approach decides based on resource utilization and is called capacity-based, while a newer trend is to use application performance as the decision-making criterion; such an approach is called performance-based. This paper discusses these two approaches and proposes a novel hybrid elasticity approach that takes into account both application performance and resource utilization to leverage the benefits of both. The proposed approach is used to realize vertical elasticity of memory (vertical memory elasticity), where the allocated memory of the VM is auto-scaled at runtime. To this end, we use control theory to synthesize a feedback controller that meets the application performance constraints by auto-scaling the allocated memory. Different from existing vertical resource elasticity approaches, the novelty of our work lies in using both memory utilization and application response time as decision-making criteria. To verify the resource efficiency and the controller's ability to handle unexpected workloads, we implemented the controller on top of the Xen hypervisor and performed a series of experiments with the RUBBoS interactive benchmark application, under synthetic and real workloads including Wikipedia and FIFA traces. The results reveal that the hybrid controller meets the application performance target with better performance stability (i.e., lower standard deviation of response time), while achieving high memory utilization (close to 83%) and allocating less memory than all other baseline controllers.
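As a rough illustration of how such a hybrid controller might combine the two decision criteria, the following Python sketch computes a new memory allocation from both the response-time error and the measured memory usage. It is not the paper's synthesized controller; the function name, gains, utilization target, and the max-of-both-signals rule are all illustrative assumptions.

```python
# A minimal sketch (not the authors' controller) of a hybrid vertical memory
# elasticity rule combining response time and memory utilization signals.
# All names, gains, and thresholds below are illustrative assumptions.

def next_memory_allocation(current_mem_mb, measured_rt_ms, target_rt_ms,
                           used_mem_mb, util_target=0.83,
                           k_perf=0.5, min_mem_mb=512, max_mem_mb=8192):
    """Return the memory (MB) to allocate to the VM for the next interval."""
    # Performance-based term: scale memory proportionally to the response-time error.
    rt_error = (measured_rt_ms - target_rt_ms) / target_rt_ms
    perf_mem = current_mem_mb * (1.0 + k_perf * rt_error)

    # Capacity-based term: size memory so utilization sits near the target.
    cap_mem = used_mem_mb / util_target

    # Hybrid decision: take the larger demand so neither criterion is starved,
    # then clamp to the range the hypervisor allows for this VM.
    proposed = max(perf_mem, cap_mem)
    return int(min(max(proposed, min_mem_mb), max_mem_mb))

# Example: a response time above target and utilization near 90% both push the
# allocation up; applying the new size (e.g., via a Xen balloon driver call)
# is outside this sketch.
print(next_memory_allocation(2048, 600, 500, 1850))
```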

3.
Dynamic resource provisioning is a challenging technique for meeting the service level agreement (SLA) requirements of multi-tier applications in virtualization-based cloud computing. Prior efforts have addressed this challenge with either a cost-oblivious or a cost-aware approach. However, both approaches may suffer frequent SLA violations under flash crowd conditions, because they ignore the benefit gained when a multi-tier application continuously meets its SLA in the new configuration. In this paper, we propose a benefit-aware approach based on feedback control theory to solve this problem. Experimental results based on live workload traces show that our approach can reduce resource provisioning cost by as much as 30% compared with a cost-oblivious approach, and can effectively reduce SLA violations compared with a cost-aware approach.

4.
With the single-instance multitenancy (SIMT) model for composite Software-as-a-Service (SaaS) applications, a single composite application instance can host multiple tenants, yielding the benefits of better service and resource utilization and reduced operational cost for the SaaS provider. An SIMT application needs to share services and their aggregation (the application) among its tenants while supporting variations in the functional and performance requirements of the tenants. The SaaS provider requires a middleware environment that can deploy, enact, and manage a designed SIMT application, to achieve the varied requirements of the different tenants in a controlled manner. This paper presents the SDSN@RT (software-defined service networks at runtime) middleware environment that can meet the aforementioned requirements. SDSN@RT represents an SIMT composite cloud application as a multitenant service network, where the same service network simultaneously hosts a set of virtual service networks, one for each tenant. A service network connects a set of services and coordinates the interactions between them. A virtual service network realizes the requirements for a specific tenant and can be deployed, configured, and logically isolated in the service network at runtime. SDSN@RT also supports the monitoring and runtime changes of the deployed multitenant service networks. We show the feasibility of SDSN@RT with a prototype implementation and demonstrate its capabilities to host SIMT applications and support their changes with a case study. The performance study of the prototype implementation shows that the runtime capabilities of our middleware incur little overhead.

5.
Multiple Internet applications are often hosted in one datacenter, sharing underlying virtualized server resources. It is important to provide differentiated treatment to co-hosted applications and to improve overall system performance by efficient use of shared resources. Challenges arise due to multi-tier service architecture, virtualized server infrastructure, and highly dynamic and bursty workloads. We propose a coordinated admission control and adaptive resource provisioning approach for multi-tier service differentiation and performance improvement in a shared virtualized platform. We develop new model-independent reinforcement learning based techniques for virtual machine (VM) auto-configuration and session based admission control. Adaptive VM auto-configuration provides proportional service differentiation between co-located applications and improves application response time simultaneously. Admission control improves session throughput of the applications and minimizes resource wastage due to aborted sessions. A shared reward actualizes coordination between the two learning modules. For system agility and scalability, we integrate the reinforcement learning approach with cascade neural networks. We have implemented the integrated approach in a virtualized blade server system hosting RUBiS benchmark applications. Experimental results demonstrate that the new approach meets differentiation targets accurately and achieves performance improvement of applications at the same time. It reacts to dynamic and bursty workloads in an agile and scalable manner.
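A minimal sketch of the kind of coordination via a shared reward described above: two tabular Q-learning agents (one for VM configuration, one for admission control) update their value tables with the same reward signal. The states, actions, reward form, and hyperparameters are assumptions for illustration, not the paper's model-independent learners or its cascade neural network integration.

```python
# Illustrative tabular Q-learning with a shared reward, loosely modelled on
# coordinating VM auto-configuration and admission control; states, actions,
# and the reward shaping are assumptions.
import random
from collections import defaultdict

class QAgent:
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)      # (state, action) -> value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:                              # explore
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])      # exploit

    def learn(self, s, a, reward, s_next):
        best_next = max(self.q[(s_next, a2)] for a2 in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(s, a)] += self.alpha * (td_target - self.q[(s, a)])

# One coordination step: both agents receive the same (shared) reward, e.g. a
# combination of differentiation error and session throughput (assumed form).
vm_agent = QAgent(actions=["cpu+", "cpu-", "noop"])
adm_agent = QAgent(actions=["admit", "reject"])
shared_reward = -abs(0.7 - 0.65) + 0.01 * 120   # hypothetical metrics
vm_agent.learn("high_load", "cpu+", shared_reward, "medium_load")
adm_agent.learn("high_load", "admit", shared_reward, "medium_load")
```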

6.
Use of virtualization in Infrastructure as a Service (IaaS) environments provides benefits to both users and providers: users can make use of resources following a pay-per-use model and negotiate performance guarantees, whereas providers can provide quick, scalable and hardware-fault tolerant service and also utilize resources efficiently and economically. With increased acceptance of virtualization-based systems, an important issue is that of virtual machine migration-enabled consolidation and dynamic resource provisioning. Effective resource provisioning can result in higher gains for users and providers alike. Most hosted applications (for example, web services) are multi-tiered and can benefit from their various tiers being hosted on different virtual machines. These mutually communicating virtual machines may get colocated on the same physical machine or placed on different machines, as part of consolidation and flexible provisioning strategies. In this work, we argue the need for network affinity-awareness in resource provisioning for virtual machines. First, we empirically quantify the change in CPU resource usage due to colocation or dispersion of communicating virtual machines for both the Xen and KVM virtualization technologies. Next, we build models based on these empirical measurements to predict the change in CPU utilization when transitioning between colocated and dispersed placements. Because the modeling process is independent of the virtualization technology and of specific applications, the resultant model is generic and application-agnostic. Via extensive experimentation, we evaluate the applicability of our models for synthetic and benchmark application workloads. We find that the models have high prediction accuracy, with a maximum prediction error within 2% absolute CPU usage.
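The abstract's modeling step can be illustrated with a simple placement-transition model: fit the CPU cost of inter-VM communication against traffic rate for colocated and dispersed placements, then predict the CPU delta of a transition. The sample points and the linear form below are invented for illustration and are not the authors' measurements or model.

```python
# A rough sketch of predicting the CPU-usage change when two communicating VMs
# move between colocated and dispersed placements. Data and linear form are
# illustrative assumptions.
import numpy as np

# (network rate in Mbps, extra CPU % consumed by communication) - hypothetical samples
colocated = np.array([[100, 2.0], [300, 5.5], [500, 9.0], [700, 12.8]])
dispersed = np.array([[100, 3.5], [300, 9.0], [500, 15.0], [700, 21.0]])

# Least-squares linear fits: cpu = a * rate + b for each placement.
a_col, b_col = np.polyfit(colocated[:, 0], colocated[:, 1], 1)
a_dis, b_dis = np.polyfit(dispersed[:, 0], dispersed[:, 1], 1)

def cpu_delta_on_dispersion(rate_mbps):
    """Predicted change in CPU usage (%) when moving two communicating VMs
    from the same host onto different hosts, at a given traffic rate."""
    return (a_dis * rate_mbps + b_dis) - (a_col * rate_mbps + b_col)

print(round(cpu_delta_on_dispersion(400), 2))
```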

7.
Both the hosts and the workloads in a cloud data center are heterogeneous, so tasks cannot make balanced use of a host's various resources. Unbalanced use of host resources ultimately leads to low overall resource utilization, wasted host resources, and higher operating costs. To address the problem that individual resources cannot be used in a balanced way during task allocation in cloud data centers, this paper proposes a virtual machine allocation and migration algorithm based on a continuous double auction. On one hand, the algorithm uses several heuristic strategies to screen the data center's hosts and virtual machines, placing overloaded and underloaded hosts into the data center auction; on the other hand, it constructs pricing and trading strategies for both buyers and sellers to form a complete auction process. In addition, to handle trading with multiple resource types, a trading strategy based on resource matching degree is proposed. Simulation experiments show that, by introducing the resource matching degree, the proposed method can effectively match the resources of data center hosts and virtual machines, balance the utilization of each resource type, and improve overall resource utilization.
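A small sketch of a resource matching degree of the kind mentioned above, used here to pair a VM from an overloaded host with the best-matching underloaded host; the cosine-similarity scoring and the feasibility check are assumptions, not the paper's pricing or trading strategy.

```python
# Illustrative multi-resource matching-degree score for pairing a VM with a
# candidate destination host in a double-auction setting (assumed form).
import math

def matching_degree(vm_demand, host_free):
    """vm_demand / host_free: dicts of resource -> amount (e.g. cpu, mem, net)."""
    keys = sorted(vm_demand)
    d = [vm_demand[k] for k in keys]
    f = [host_free[k] for k in keys]
    # Infeasible placements get degree 0.
    if any(di > fi for di, fi in zip(d, f)):
        return 0.0
    dot = sum(di * fi for di, fi in zip(d, f))
    norm = math.sqrt(sum(x * x for x in d)) * math.sqrt(sum(x * x for x in f))
    return dot / norm if norm else 0.0

# Pick, for one VM to be migrated, the underloaded host with the best match.
vm = {"cpu": 2.0, "mem": 4.0, "net": 100.0}
hosts = {"h1": {"cpu": 4.0, "mem": 6.0, "net": 200.0},
         "h2": {"cpu": 8.0, "mem": 4.5, "net": 900.0}}
best = max(hosts, key=lambda h: matching_degree(vm, hosts[h]))
print(best, round(matching_degree(vm, hosts[best]), 3))
```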

8.
杨劲, 庞建民, 齐宁, 刘睿. 《计算机科学》, 2017, 44(3): 73-78, 117
Because they are easy to deploy and maintain and do not require building a private machine room, cloud data centers are becoming the first choice of most Internet companies, especially startups and small and medium-sized companies, for deploying their applications. In an Infrastructure-as-a-Service cloud environment, Internet companies can lease cloud infrastructure dynamically according to their applications' needs, saving budget while maintaining application performance. However, in current industry practice, the load balancing and resource scaling services offered by cloud providers can only monitor the usage of virtual machines, not the runtime state of applications, and therefore cannot adapt the resource scale accurately to the applications' service demands. At the same time, existing literature and practice rarely take the perspective of cloud infrastructure users, helping them save on infrastructure leasing costs or use already-leased infrastructure efficiently. Accordingly, this paper proposes a cost-efficient resource management approach for multi-tier applications in IaaS cloud environments that reduces users' costs while making full use of what is spent to improve application performance. Simulation comparisons between the proposed approach and approaches actually used in industry show that it not only improves the applications' quality of service and performance, but also substantially reduces a company's infrastructure leasing costs.

9.
Taking the construction of government agency website clusters as its subject, this paper builds an experimental and application platform for website clusters based on cloud computing, and describes an architecture that uses cloud computing to support government website clusters, along with its security design and application methods. The paper meets the requirements of website cluster applications from two aspects: on one hand, a cloud computing infrastructure platform supporting website cluster applications is built in the IaaS (Infrastructure as a Service) model, and a website cluster content management cloud platform is built in the SaaS (Software as a Service) model; on the other hand, public and private cloud environments are constructed to ensure the security of the website cluster system and its information. Providing users with one-stop construction, operation, and monitoring of website clusters on a cloud computing platform will become a new direction for government website cluster construction.

10.
As green computing becomes a popular computing paradigm, the performance of energy-efficient data centers becomes increasingly important. This paper proposes power-aware performance management via stochastic control (PAPMSC), a novel stochastic control approach for virtualized web servers. It addresses the instability and inefficiency issues caused by dynamic web workloads. It features a coordinated control architecture that optimizes resource allocation and minimizes overall power consumption while guaranteeing the service level agreements (SLAs). More specifically, due to the interference effect among co-located virtualized web servers and time-varying workloads, the relationship between the hardware resources assigned to different virtual servers and the web applications' performance is treated as a coupled Multi-Input-Multi-Output (MIMO) system and formulated as a robust optimization problem. We propose a constrained stochastic linear-quadratic controller (cSLQC) to solve the problem by minimizing a quadratic cost function subject to constraints on resource allocation and application performance. Furthermore, a proportional controller is integrated to enhance system stability. In a second control layer, we dynamically manipulate the physical frequency for power efficiency using an adaptive linear quadratic regulator (ALQR). Experiments on our testbed server with a variety of workload patterns demonstrate that the proposed control solution significantly outperforms existing solutions in terms of effectiveness and robustness.
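For context, the quadratic-cost machinery underlying such a controller can be sketched with a plain (unconstrained) discrete-time LQR gain computed from the discrete algebraic Riccati equation; the paper's controller is a constrained stochastic variant, and the system matrices below are made up.

```python
# Illustrative unconstrained discrete-time LQR gain computation; the paper's
# controller is a *constrained stochastic* LQ controller, so this only sketches
# the underlying quadratic-cost idea. System matrices are hypothetical.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1], [0.0, 0.9]])   # hypothetical state dynamics
B = np.array([[0.0], [0.5]])             # hypothetical input (resource knob)
Q = np.diag([10.0, 1.0])                 # penalize SLA deviation more heavily
R = np.array([[1.0]])                    # penalize large resource adjustments

P = solve_discrete_are(A, B, Q, R)
K = np.linalg.inv(R + B.T @ P @ B) @ (B.T @ P @ A)

x = np.array([[0.3], [0.1]])             # current deviation from the SLA operating point
u = -K @ x                               # resource adjustment for the next interval
print(np.round(u, 4))
```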

11.
Resource allocation strategies in virtualized data centers have received considerable attention recently as they can have substantial impact on the energy efficiency of a data center. This led to new decision and control strategies with significant managerial impact for IT service providers. We focus on dynamic environments where virtual machines need to be allocated and deallocated to servers over time. Simple bin packing heuristics have been analyzed and used to place virtual machines upon arrival. However, these placement heuristics can lead to suboptimal server utilization, because they cannot consider virtual machines which arrive in the future. We ran extensive lab experiments and simulations with different controllers and different workloads to understand which control strategies achieve high levels of energy efficiency in different workload environments. We found that combinations of placement controllers and periodic reallocations achieve the highest energy efficiency subject to predefined service levels. While the type of placement heuristic had little impact on the average server demand, the type of virtual machine resource demand estimator used for the placement decisions had a significant impact on the overall energy efficiency.
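A toy first-fit-decreasing placement, of the kind the abstract refers to as a bin packing heuristic, is sketched below; the study's point is that the demand estimate fed into such a heuristic matters more than the heuristic itself. Capacities and demands are invented.

```python
# A small first-fit-decreasing sketch of a VM placement heuristic.
def first_fit_decreasing(vm_demands, server_capacity):
    """vm_demands: {vm: estimated demand}; returns {server_index: [vms]}."""
    servers = []   # list of [remaining capacity, [vms]]
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for slot in servers:
            if slot[0] >= demand:
                slot[0] -= demand
                slot[1].append(vm)
                break
        else:                       # open a new server if nothing fits
            servers.append([server_capacity - demand, [vm]])
    return {i: slot[1] for i, slot in enumerate(servers)}

# Example: a demand estimate (here just given numbers, e.g. a percentile of
# observed usage) is packed onto servers with 16 units of capacity each.
print(first_fit_decreasing({"vm1": 9, "vm2": 7, "vm3": 6, "vm4": 4}, 16))
```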

12.
Elasticity (on-demand scaling) of applications is one of the most important features of cloud computing. This elasticity is the ability to adaptively scale resources up and down in order to meet varying application demands. To date, most existing scaling techniques can maintain applications’ Quality of Service (QoS) but do not adequately address issues relating to minimizing the costs of using the service. In this paper, we propose an elastic scaling approach that makes use of cost-aware criteria to detect and analyse the bottlenecks within multi-tier cloud-based applications. We present an adaptive scaling algorithm that reduces the costs incurred by users of cloud infrastructure services, allowing them to scale their applications only at bottleneck tiers, and present the design of an intelligent platform that automates the scaling process. Our approach is generic for a wide class of multi-tier applications, and we demonstrate its effectiveness against other approaches by studying the behaviour of an example e-commerce application using a standard workload benchmark.
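The core idea of scaling only the bottleneck tier can be sketched as follows; the utilization threshold, tier names, and cost figures are illustrative assumptions rather than the paper's cost-aware criteria.

```python
# Sketch of scaling only the bottleneck tier of a multi-tier application.
def scale_bottleneck_tier(tiers, util_threshold=0.8):
    """tiers: {name: {"util": float, "instances": int, "hourly_cost": float}}.
    Returns the scaling action (tier, new instance count) or None."""
    # The bottleneck is the most utilized tier, if it exceeds the threshold.
    name, info = max(tiers.items(), key=lambda kv: kv[1]["util"])
    if info["util"] < util_threshold:
        return None                       # no tier is saturated; do not pay for more VMs
    return name, info["instances"] + 1    # add capacity only where it is needed

tiers = {"web": {"util": 0.55, "instances": 2, "hourly_cost": 0.10},
         "app": {"util": 0.92, "instances": 2, "hourly_cost": 0.20},
         "db":  {"util": 0.60, "instances": 1, "hourly_cost": 0.40}}
print(scale_bottleneck_tier(tiers))   # ('app', 3)
```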

13.
Finding effective ways to collect network resource usage across all kinds of applications has become a key requirement for a distributed control plane to improve the controllers’ decision-making performance. This paper explores an efficient way of combining dynamic NetView sharing among distributed controllers with the intra-service resource announcement and processing behavior that occurs in them, and proposes a rapid multipathing distribution mechanism. First, we establish a resource collecting model and prove that a prisoner’s dilemma exists in the distributed resource collecting process in a Software-Defined Network (SDN). Second, we present a bypass path selection algorithm and a diffluence algorithm based on Q-learning to resolve this dilemma. Finally, simulation results show that the proposed approach improves resource collecting efficiency through its self-adaptive path transmission ratio mechanism, which ensures high utilization of the overall network in our setup.
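The self-adaptive path transmission ratio mentioned in the conclusion can be illustrated with a very small update rule that shifts announcement traffic between a primary and a bypass path based on their utilization; this is not the paper's Q-learning-based diffluence algorithm, and the step size and inputs are assumptions.

```python
# Minimal sketch of adaptively splitting traffic between a primary path and a
# bypass path; the update rule and step size are illustrative assumptions.
def update_split_ratio(ratio, primary_util, bypass_util, step=0.05):
    """ratio: fraction of traffic on the primary path (0..1). Shift traffic
    toward whichever path is less utilized, in small increments."""
    if primary_util > bypass_util:
        ratio -= step
    elif bypass_util > primary_util:
        ratio += step
    return min(max(ratio, 0.0), 1.0)

ratio = 0.9
for primary_util, bypass_util in [(0.85, 0.30), (0.80, 0.40), (0.70, 0.55)]:
    ratio = update_split_ratio(ratio, primary_util, bypass_util)
print(round(ratio, 2))   # traffic gradually shifts toward the bypass path
```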

14.
Cloud computing allows execution and deployment of different types of applications, such as interactive databases or web-based services, which require distinct types of resources. These applications lease cloud resources for a considerably long period and usually occupy various resources to maintain a high quality of service (QoS). General big data batch processing workloads, on the other hand, are less QoS-sensitive and require massively parallel cloud resources for short periods. Despite the elasticity of cloud computing, the fine-scale characteristics of cloud-based applications may cause temporarily low resource utilization in cloud computing systems, while process-intensive, highly utilized workloads suffer from performance issues. The ability to schedule heterogeneous workloads in a utilization-efficient way is therefore a challenging issue for cloud owners. In this paper, addressing the impact of heterogeneity on low utilization of cloud computing systems, a joint resource allocation scheme for cloud applications and processing jobs is presented to enhance cloud utilization. The main idea is to schedule processing jobs and cloud applications jointly, in a preemptive way. However, utilization-efficient resource allocation requires exact modeling of workloads. So, first, a novel methodology for modeling processing jobs and other cloud applications is proposed: such jobs are modeled as a collection of parallel and sequential tasks in a Markovian process, which enables us to analyze and calculate the resources required to serve the tasks efficiently. The next step uses the proposed model to develop a preemptive scheduling algorithm for processing jobs in order to improve resource utilization and its associated costs in the cloud computing system. Accordingly, a preemption-based resource allocation architecture is proposed to utilize idle reserved resources for processing jobs effectively and efficiently in cloud paradigms. Performance metrics such as the service time of processing jobs are then investigated. The accuracy of the proposed analytical model and scheduling analysis is verified through simulations and experiments, which also shed light on the achievable QoS level for the preemptively allocated processing jobs.
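A compact sketch of the preemption idea: batch jobs are backfilled into the idle share of reserved capacity and evicted when the QoS-sensitive application reclaims it. The capacity units, eviction order, and data structures are illustrative assumptions, not the paper's Markovian model or scheduler.

```python
# Sketch of preemptively running batch jobs on idle reserved capacity and
# evicting them when the QoS-sensitive application reclaims resources.
def schedule(reserved_capacity, app_demand, batch_queue, running_batch):
    idle = reserved_capacity - app_demand
    # Preempt batch jobs (last started first) until they fit in the idle share.
    while sum(running_batch.values()) > idle and running_batch:
        victim = next(reversed(running_batch))
        batch_queue.insert(0, (victim, running_batch.pop(victim)))
    # Backfill more batch jobs into whatever idle capacity remains.
    while batch_queue and sum(running_batch.values()) + batch_queue[0][1] <= idle:
        job, need = batch_queue.pop(0)
        running_batch[job] = need
    return running_batch

running = schedule(reserved_capacity=16, app_demand=10,
                   batch_queue=[("b1", 3), ("b2", 4)], running_batch={})
print(running)                            # {'b1': 3} -- b2 (4 units) does not fit in 6 idle units
running = schedule(16, 14, [("b2", 4)], running)
print(running)                            # {} -- app demand grows, b1 is preempted back to the queue
```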

15.
With the popularity and development of cloud computing, more and more application systems are deployed on cloud servers, acquiring virtual resources elastically on demand and paying by usage. How to allocate and use virtual resources in a cost-effective way while keeping the application system running optimally has therefore become an important research problem. Traditional manual tuning not only burdens system administrators but is also inaccurate and slow to react. Most existing dynamic virtual resource allocation methods adjust resources only after runtime quality problems have been detected, so they react with a delay, and they also ignore the impact of the heterogeneity of virtual resources. To address these problems, this paper proposes a control-theory-based dynamic virtual resource allocation method. The method uses a feedforward controller to dynamically adjust the number of virtual resources, and a feedback controller to dynamically tune the proportion of the workload handled by each virtual resource, thereby achieving optimal operation of the application and effective use of virtual resources. Comparative experiments against a static allocation method and a feedforward-only method show that the proposed method keeps the application running optimally while improving the effectiveness of virtual resource utilization.
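A minimal sketch of the feedforward/feedback split described above: a feedforward step sizes the VM pool from the predicted workload, and a feedback step nudges per-VM load shares toward equal response times. The per-VM capacity, gain, and update rule are assumptions, not the paper's controllers.

```python
# Sketch of a feedforward step (pool sizing from predicted load) and a feedback
# step (load-ratio adjustment from measured response times); all parameters are
# illustrative assumptions.
import math

def feedforward_vm_count(predicted_rps, per_vm_capacity_rps=150):
    return max(1, math.ceil(predicted_rps / per_vm_capacity_rps))

def feedback_load_ratios(ratios, response_times, gain=0.1):
    """Shift load away from VMs whose response time is above the mean."""
    mean_rt = sum(response_times) / len(response_times)
    adjusted = [r * (1 - gain * (rt - mean_rt) / mean_rt)
                for r, rt in zip(ratios, response_times)]
    total = sum(adjusted)
    return [a / total for a in adjusted]   # renormalize so ratios sum to 1

n = feedforward_vm_count(predicted_rps=400)              # -> 3 VMs
ratios = feedback_load_ratios([1/3, 1/3, 1/3], [120, 180, 150])
print(n, [round(r, 3) for r in ratios])
```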

16.
Network virtualization offers a new way to address the "ossification" of the Internet and has attracted wide attention. In a virtual router platform, a number of interconnected network servers form the underlying physical network; virtual network mapping techniques map these physical resources onto virtual network devices, composing multiple virtual networks that meet users' diverse networking needs. The virtual router resource mapping problem is the foundation of the virtual network mapping problem, and the way virtual router instances are mapped to physical resources determines the resource utilization of the platform and the performance of the virtual network system. For the resource allocation problem on virtual router platforms, this paper proposes a physical network resource model and a virtual router resource request model, designs a heuristic virtual router resource allocation algorithm, and analyzes the algorithm's complexity and optimization objectives.

17.
In software-defined networking (SDN), the communication between controllers and switches is critical, because a switch can only operate using the flow tables it receives from its controller. How to ensure the reliability of this communication is therefore a key problem in SDN. In this paper, we study the problem from two aspects: controller placement and resource backup. First, to implement reliable communication and meet the required propagation delay between controllers and switches, a min-cover based controller placement approach is proposed. Then, to protect both controllers and control paths from regional failures, a backup method based on an exponential decay failure model is proposed, which considers regional influence and the survivability of backup controllers and control paths. Simulations show that our controller placement approach meets the reliability and delay requirements with an appropriate controller allocation scheme, and that our backup method improves the survivability of backup controllers and control paths while maintaining the performance of the control network.
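The min-cover flavor of the placement step can be sketched with a greedy set cover that picks controller sites until every switch is within a delay bound of some chosen site; the delay matrix and bound below are invented, and greedy set cover only approximates the minimum cover.

```python
# Greedy set-cover sketch of min-cover controller placement; inputs are assumed.
def place_controllers(delay, bound):
    """delay[c][s]: propagation delay from candidate site c to switch s."""
    switches = {s for c in delay for s in delay[c]}
    chosen, uncovered = [], set(switches)
    while uncovered:
        # Pick the candidate site covering the most still-uncovered switches.
        best = max(delay, key=lambda c: len({s for s in uncovered if delay[c][s] <= bound}))
        covered = {s for s in uncovered if delay[best][s] <= bound}
        if not covered:
            raise ValueError("some switches cannot be covered within the bound")
        chosen.append(best)
        uncovered -= covered
    return chosen

delay = {"c1": {"s1": 2, "s2": 3, "s3": 9},
         "c2": {"s1": 8, "s2": 4, "s3": 2}}
print(place_controllers(delay, bound=5))   # e.g. ['c1', 'c2']
```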

18.
To address the low resource utilization caused by provisioning processor resources for peak load, offering only a single service application, and dynamically changing resource demands on cloud platforms, this paper uses a cloud virtual machine center to provide multiple service applications simultaneously. A grey waveform prediction algorithm forecasts the volume of service requests arriving at each virtual machine in the coming period; a VM service utility function that accounts for both resource demand and service priority is defined; and physical resources are dynamically allocated to the virtual machines on each physical machine with the goal of maximizing the machine's service utility. Through global load balancing among virtual machines of the same type and repeated reallocation of physical resources among the VMs within a physical machine, the resource allocation of the VM types with higher request volumes is further increased. Finally, an on-demand resource allocation algorithm for the virtual machine center based on grey waveform prediction, ODRGWF, is presented. Simulation experiments show that the proposed algorithm effectively improves processor resource utilization in the cloud platform and has practical significance for improving user request completion rates and quality of service.
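As a rough illustration of grey-model forecasting of request volume, the sketch below implements the basic GM(1,1) model; the paper uses grey waveform prediction, a more elaborate variant, and the sample series here is made up.

```python
# Minimal GM(1,1) grey-model sketch for forecasting next-interval request volume.
import numpy as np

def gm11_forecast(x0, steps=1):
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                   # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                        # background values
    B = np.column_stack((-z1, np.ones(len(z1))))
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]          # develop / grey coefficients
    def x1_hat(k):                                       # fitted accumulated value at index k
        return (x0[0] - b / a) * np.exp(-a * k) + b / a
    n = len(x0)
    return [x1_hat(n + i) - x1_hat(n + i - 1) for i in range(steps)]

# Requests per interval observed so far (hypothetical); predict the next interval.
history = [820, 870, 930, 1010, 1080]
print(round(gm11_forecast(history)[0]))
```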

19.
Mobile devices have become more and more popular in recent years. As a result, there is huge demand for mobile applications, especially those that integrate multiple sources of information. However, most current mobile applications contain only certain kinds of information and cannot adapt to rapidly changing user requirements. Moreover, building such applications is usually time-consuming, and there are not enough resource components with programmable interfaces. In this paper, we propose an Internetware-based approach to building web page integration applications for mobile devices. We introduce a framework that provides abundant Internet-programmable interfaces, a flexible integration mechanism to meet rapidly changing user requirements, and a reliability mechanism that effectively guarantees the quality of the referenced resources. With this framework, we can rapidly build an application that integrates all the information a user requires.

20.
Business workloads in service-oriented architectures (SOA) fluctuate widely, and their resource demands change dynamically and are requested on demand. This paper analyzes the main techniques of SOA and virtualization and, on that basis, proposes an SOA resource guarantee model comprising a service application layer, a resource matching layer, a logical virtualization layer, and a physical resource layer. Experimental results show that the model effectively addresses the optimized matching between user-level service applications and the underlying physical resources, and improves the utilization of the underlying resources.
