Similar Documents
1.
Optimizing virtual machine placement is an effective way to reduce the energy consumption of cloud data centers, but over-aggressive consolidation of virtual machines may create hotspots in host racks and harm the reliability of the services the data center provides. A virtual machine placement algorithm based on energy efficiency and reliability is proposed. Taking into account the interrelations among host utilization, host temperature, host power, cooling-system power, and host reliability, a redundancy model that guarantees host reliability is established. Dynamic virtual machine placement decisions are made while proactively avoiding rack hotspots, so that host service reliability is guaranteed while the overall energy consumption of the data center is reduced. Simulation results show that the algorithm not only saves more energy and avoids hotspot hosts, but also provides better performance guarantees.
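The abstract describes a placement decision that jointly weighs host utilization, temperature, and power to avoid rack hotspots. As a rough illustration of that kind of check (not the paper's redundancy/reliability model), the sketch below filters candidate hosts with assumed utilization and temperature thresholds and a crude linear thermal estimate:

```python
# Illustrative sketch only: a placement filter that rejects hosts whose
# utilization or estimated temperature would exceed hotspot thresholds.
# All constants and the linear temperature model are assumptions, not the
# paper's actual redundancy/reliability model.

UTIL_THRESHOLD = 0.80          # assumed maximum safe CPU utilization
TEMP_THRESHOLD = 75.0          # assumed hotspot temperature limit (Celsius)
AMBIENT = 25.0                 # assumed inlet temperature
TEMP_PER_UTIL = 40.0           # assumed degrees added at 100% utilization

def estimated_temperature(utilization):
    """Crude linear thermal estimate used only for illustration."""
    return AMBIENT + TEMP_PER_UTIL * utilization

def can_host(host_util, host_capacity_mips, vm_mips):
    """Accept a VM only if neither the utilization nor the hotspot limit is violated."""
    new_util = host_util + vm_mips / host_capacity_mips
    return new_util <= UTIL_THRESHOLD and estimated_temperature(new_util) <= TEMP_THRESHOLD

# Example: a host at 60% utilization with 10000 MIPS considering a 1500-MIPS VM.
print(can_host(0.60, 10000, 1500))   # True: 75% utilization, ~55 C estimate
```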

2.
何丽 《计算机应用》2014,34(8):2252-2255
To coordinate the improvement of resource utilization with the reduction of system energy consumption in cloud computing systems, a new virtual machine allocation method based on grey relational degree is proposed. Using the basic theory of grey relational analysis, a virtual machine allocation decision model is built from evaluation functions for the service level agreement (SLA) violation rate, system energy consumption, and server load; a grey-relational-degree-based allocation algorithm is constructed and evaluated on the CloudSim simulation platform. The experimental results show that, compared with the traditional multi-objective optimization method based on simple linear weighting, the grey-relational-degree-based allocation method reduces system energy consumption, SLA violation rate, and the number of virtual machine migrations by 6.8%, 5.2%, and 15.5% on average under different virtual machine selection policies. The proposed method can therefore substantially reduce the number of virtual machine migrations under different selection policies and better satisfy the system's optimization requirements on energy consumption and SLA violations.
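Grey relational analysis, the core technique named in this abstract, can be illustrated with a small, self-contained sketch. The normalization, the equal criterion weights, and the distinguishing coefficient of 0.5 below are conventional GRA defaults, not values taken from the paper, and the three example criteria merely stand in for its SLA-violation, energy, and load evaluation functions:

```python
# Minimal grey relational analysis (GRA) sketch for ranking candidate hosts.
# The three criteria (energy, SLA violation, load imbalance) are all treated as
# "smaller is better"; equal weights and the 0.5 distinguishing coefficient are
# conventional GRA defaults, not values taken from the paper.

def grey_relational_grades(candidates, rho=0.5):
    """candidates: list of [energy, sla_violation, load] rows, smaller is better."""
    n_criteria = len(candidates[0])
    cols = list(zip(*candidates))
    # Normalize each criterion to [0, 1] with smaller-is-better orientation.
    norm = []
    for row in candidates:
        norm.append([(max(c) - x) / (max(c) - min(c)) if max(c) > min(c) else 1.0
                     for x, c in zip(row, cols)])
    reference = [1.0] * n_criteria            # ideal (best) sequence
    deltas = [[abs(r - x) for r, x in zip(reference, row)] for row in norm]
    dmin = min(min(row) for row in deltas)
    dmax = max(max(row) for row in deltas)
    grades = []
    for row in deltas:
        coeffs = [(dmin + rho * dmax) / (d + rho * dmax) for d in row]
        grades.append(sum(coeffs) / n_criteria)   # equal weights assumed
    return grades

# Example: three hosts described by (energy kWh, SLA violation rate, load imbalance).
hosts = [[120, 0.05, 0.30], [100, 0.08, 0.25], [140, 0.03, 0.40]]
print(grey_relational_grades(hosts))  # the highest grade marks the preferred host
```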

3.
Significant savings in energy consumption, without sacrificing the service level agreement (SLA), are an excellent economic incentive for cloud providers. By applying efficient virtual machine placement and consolidation algorithms, they are able to achieve these goals. In this paper, we propose a comprehensive technique for optimizing energy consumption and reducing SLA violations. In the proposed approach, the issues of allocation and management of virtual machines are divided into smaller parts, and in each part new algorithms are proposed or existing algorithms are improved. The proposed method performs all steps in distributed mode and acts in centralized mode only for the placement of virtual machines, which requires a global view. For this purpose, a population-based (parallel) simulated annealing (SA) algorithm is used within a Markov chain model for the virtual machine placement policy. Simulation of the algorithms under different scenarios in CloudSim confirms the better performance of the proposed comprehensive algorithm.
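As a rough illustration of the placement sub-problem, the following is a minimal single-chain simulated annealing sketch; the cost function (number of active hosts as an energy proxy), the geometric cooling schedule, and all parameters are placeholder assumptions, and the paper itself uses a population-based/parallel SA variant within a Markov chain model:

```python
# Minimal single-chain simulated annealing sketch for VM-to-host placement.
# The cost (number of active hosts, a consolidation/energy proxy) and the
# geometric cooling schedule are illustrative placeholders only.
import math, random

def feasible(assign, vm_cpu, host_cap):
    load = [0.0] * len(host_cap)
    for vm, host in enumerate(assign):
        load[host] += vm_cpu[vm]
    return all(l <= c for l, c in zip(load, host_cap))

def cost(assign):
    return len(set(assign))          # fewer active hosts = lower energy proxy

def anneal(vm_cpu, host_cap, t0=10.0, alpha=0.95, steps=2000):
    random.seed(0)
    assert len(vm_cpu) <= len(host_cap), "sketch assumes one host per VM initially"
    assign = list(range(len(vm_cpu)))          # trivial feasible start: VM i on host i
    best, t = list(assign), t0
    for _ in range(steps):
        cand = list(assign)
        cand[random.randrange(len(cand))] = random.randrange(len(host_cap))
        if feasible(cand, vm_cpu, host_cap):
            delta = cost(cand) - cost(assign)
            if delta <= 0 or random.random() < math.exp(-delta / t):
                assign = cand
                if cost(assign) < cost(best):
                    best = list(assign)
        t *= alpha                              # geometric cooling
    return best

print(anneal(vm_cpu=[0.2, 0.3, 0.4, 0.1, 0.25], host_cap=[1.0] * 5))
```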

4.
Infrastructure-as-a-service (IaaS) is one of the emerging, powerful cloud computing services currently provided by the IT industry. This paper considers the interaction between on-demand requests and the allocation of virtual machines in a server farm operated by a specific infrastructure owner. We formulate an analytic performance model of the server farm that takes into account the quality of service (QoS) guaranteed to users and the operational energy consumption of the server farm. We compare several scheduling algorithms with respect to the average energy consumption and heat emission of servers as well as the blocking probabilities of on-demand requests. Numerical results comparing different allocation strategies show that savings in energy consumption are possible in the operational range (where on-demand requests do not face an unpleasant blocking probability) when virtual machines are allocated to physical servers based on priority.
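The abstract evaluates allocation strategies via the blocking probabilities of on-demand requests. Assuming, purely for illustration, a classical Erlang loss model (which may differ from the paper's actual performance model), the blocking probability of a farm with a fixed number of VM slots can be computed with the standard Erlang-B recursion:

```python
# Erlang-B blocking probability via its numerically stable recursion.
# A textbook illustration of how blocking depends on offered load and the
# number of VM slots; not claimed to be the exact loss model used in the paper.

def erlang_b(offered_load, servers):
    """offered_load = arrival_rate / service_rate (in Erlangs); servers = VM slots."""
    b = 1.0                                    # B(a, 0) = 1
    for k in range(1, servers + 1):
        b = (offered_load * b) / (k + offered_load * b)
    return b

# Example: 80 Erlangs of on-demand requests offered to 100 VM slots.
print(f"blocking probability: {erlang_b(80.0, 100):.4f}")
```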

5.

In cloud computing, virtual machine placement is a critical process that aims to identify the most appropriate physical machine to host each virtual machine. It has a significant impact on the performance, resource usage, and energy consumption of datacenters. In order to reduce the number of active physical machines in a datacenter, several virtual machine placement schemes have already been designed and proposed. This study investigates how four different methods compare to each other in terms of accuracy and efficiency for solving virtual machine placement as a knapsack problem. A new approach is adopted which focuses on maximizing the use of a server's central processing unit resource subject to a certain capacity threshold. The compared methods fall into two categories: two are exact methods, i.e., branch and bound and dynamic programming, while the other two are approximate approaches, i.e., the genetic algorithm and the ant colony optimization algorithm. Experimental results show that the metaheuristic ant colony optimization algorithm outperforms the other three algorithms in terms of efficiency.
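Dynamic programming is one of the two exact methods compared here. A minimal sketch of the knapsack formulation it would solve, assuming integer CPU units and an illustrative 80% capacity threshold, looks like this:

```python
# Dynamic-programming sketch for one knapsack instance of the placement problem:
# choose the subset of pending VMs that maximizes used CPU on a single server
# without exceeding a capacity threshold. Integer CPU units and the 80-unit
# threshold are illustrative assumptions.

def pack_server(vm_cpu, capacity):
    """0/1 knapsack where value == weight (maximize CPU usage under capacity)."""
    best = [0] * (capacity + 1)          # best[c] = max CPU usable with budget c
    choice = [[False] * len(vm_cpu) for _ in range(capacity + 1)]
    for i, w in enumerate(vm_cpu):
        for c in range(capacity, w - 1, -1):     # descending c: classic 0/1 update
            if best[c - w] + w > best[c]:
                best[c] = best[c - w] + w
                choice[c] = list(choice[c - w])
                choice[c][i] = True
    picked = [i for i, used in enumerate(choice[capacity]) if used]
    return best[capacity], picked

# Example: a 100-unit server capped at an 80% CPU threshold (80 units).
vm_cpu = [35, 20, 25, 40, 10]
used, vms = pack_server(vm_cpu, capacity=80)
print(used, vms)   # -> 80 [0, 1, 2]
```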


6.
Server consolidation is very attractive for cloud computing platforms to improve energy efficiency and resource utilization. Advances in multi-core processors and virtualization technologies have enabled many workloads to be consolidated on a single physical server. However, current virtualization technologies do not ensure performance isolation among guest virtual machines, which results in degraded performance due to contention for shared resources and violation of the service level agreement (SLA) of the cloud service. In that sense, minimizing performance interference among co-located virtual machines is a key factor in a successful server consolidation policy for cloud computing platforms. In this work, we propose a performance model that considers interference in the shared last-level cache and memory bus. Our performance interference model can estimate how much an application will hurt others and how much an application will suffer from others. We also present a virtual machine consolidation method called swim which is based on our interference model. Experimental results show that the average performance degradation ratio achieved by swim is comparable to that of the optimal allocation.
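As a toy illustration of an interference model of this kind (not the paper's swim model), assume each VM is profiled by its pressure on the last-level cache and memory bus plus a sensitivity value, and that slowdown grows linearly with co-runners' combined pressure; the coefficients and profile fields below are invented for the example:

```python
# Toy interference estimate (not the paper's swim model): each VM is profiled by
# its last-level-cache and memory-bus pressure, and its slowdown on a host is
# assumed to grow linearly with the combined pressure of its co-runners.
from dataclasses import dataclass

@dataclass
class VmProfile:
    name: str
    llc_pressure: float    # pressure it puts on the shared last-level cache
    bus_pressure: float    # pressure it puts on the memory bus
    sensitivity: float     # how strongly it suffers from co-runners' pressure

def estimated_slowdown(vm, co_runners, a_llc=0.6, a_bus=0.4):
    """Fractional slowdown of `vm` when consolidated with `co_runners` (assumed linear)."""
    pressure = sum(a_llc * c.llc_pressure + a_bus * c.bus_pressure for c in co_runners)
    return vm.sensitivity * pressure

web = VmProfile("web", llc_pressure=0.2, bus_pressure=0.1, sensitivity=0.8)
db = VmProfile("db", llc_pressure=0.6, bus_pressure=0.5, sensitivity=0.5)
batch = VmProfile("batch", llc_pressure=0.9, bus_pressure=0.8, sensitivity=0.1)

# A consolidation decision would prefer pairings with the lowest total estimated slowdown.
print(estimated_slowdown(web, [db]), estimated_slowdown(web, [batch]))  # web suffers more next to batch
```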

7.
Existing research on the virtual machine mapping problem mainly aims to improve server resource efficiency and energy efficiency. Considering the impact of virtual machine mapping on the energy consumption of both servers and network devices, and building on definitions of physical servers, virtual machine resources and states, virtual machine mappings, and the network communication matrix, this work models the virtual machine mapping problem with joint optimization of energy consumption and the network. The problem is abstracted as a bin-packing problem under multi-resource constraints combined with a quadratic assignment problem (QAP), and a virtual machine mapping algorithm named CSNEO, which combines ant colony optimization (ACO) with the 2-exchange local search algorithm, is designed to solve it. Comparative experiments against four algorithms, including MDBP-ACO and vector-VM, show that CSNEO achieves higher virtual machine mapping efficiency while satisfying multi-dimensional resource constraints, and achieves better energy efficiency than placement algorithms that consider only network optimization while still realizing network optimization.
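CSNEO combines ACO with 2-exchange local search. The 2-exchange step alone can be sketched as follows, with a placeholder communication-cost function standing in for the paper's combined energy-and-network objective:

```python
# Minimal 2-exchange local search sketch: repeatedly swap the hosts of two VMs
# if the swap lowers the cost, until no improving swap remains. The cost
# function here is a placeholder; in CSNEO it would combine server energy and
# traffic over the network communication matrix.

def two_exchange(assign, cost):
    """assign: list mapping VM index -> host index; cost: callable on assignments."""
    improved = True
    while improved:
        improved = False
        for i in range(len(assign)):
            for j in range(i + 1, len(assign)):
                if assign[i] == assign[j]:
                    continue
                cand = list(assign)
                cand[i], cand[j] = cand[j], cand[i]
                if cost(cand) < cost(assign):
                    assign, improved = cand, True
    return assign

# Placeholder cost: penalize traffic between communicating VM pairs on different hosts.
traffic = {(0, 1): 5, (2, 3): 4, (1, 2): 1}
def comm_cost(assign):
    return sum(t for (a, b), t in traffic.items() if assign[a] != assign[b])

print(two_exchange([0, 1, 0, 1], comm_cost))   # moves chatty VM pairs onto the same host
```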

8.
The consolidation of multiple workloads and servers enables the efficient use of server and power resources in shared resource pools. We employ a trace-based workload placement controller that uses historical information to periodically and proactively reassign workloads to servers subject to their quality of service objectives. A reactive migration controller is introduced that detects server overload and underload conditions. It initiates the migration of workloads when the demand for resources exceeds supply. Furthermore, it dynamically adds and removes servers to maintain a balance of supply and demand for capacity while minimizing power usage. A host load simulation environment is used to evaluate several different management policies for the controllers in a time effective manner. A case study involving three months of data for 138 SAP applications compares three integrated controller approaches with the use of each controller separately. The study considers trade-offs between: (i) required capacity and power usage, (ii) resource access quality of service for CPU and memory resources, and (iii) the number of migrations. Our study sheds light on the question of whether a reactive controller or proactive workload placement controller alone is adequate for resource pool management. The results show that the most tightly integrated controller approach offers the best results in terms of capacity and quality but requires more migrations per hour than the other strategies.

9.
Virtualisation and cloud computing have recently received significant attention. Resource allocation and control of multiple resource usages among virtual machines in virtualised data centres remains an open problem. Therefore, in this paper, our focus is to control the CPU (central processing unit) usage and memory consumption of a virtual database machine in a data centre under a time-varying heavy workload. In addition to existing work, we attempt to control multiple outputs, such as the CPU usage and memory consumption of a virtualised database server (DBVM), by changing multiple server parameters, such as the CPU allocation and memory allocation, in real time. We indicate that a virtualised database server may be modelled as a linear time-invariant system. We obtained and compared both MIMO (multiple-input multiple-output) and multiple SISO (single-input single-output) models of that system. We designed multiple SISO feedback controllers to achieve the desired CPU usage and memory consumption under the workload.
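A single SISO loop of the kind described can be sketched with a simple discrete-time incremental PI controller; the gains, the clamping range, and the toy saturating "plant" below are illustrative assumptions rather than the model identified in the paper:

```python
# Minimal discrete-time incremental PI controller sketch for one SISO loop:
# adjust the CPU allocation (cap) of a virtualised database server so that its
# measured CPU usage tracks a reference. Gains, limits, and the toy plant are
# illustrative assumptions only.

class IncrementalPi:
    def __init__(self, kp=0.5, ki=0.3, lo=0.1, hi=1.0):
        self.kp, self.ki, self.lo, self.hi = kp, ki, lo, hi
        self.prev_error = 0.0

    def step(self, reference, measured, current_allocation):
        error = reference - measured
        delta = self.kp * (error - self.prev_error) + self.ki * error
        self.prev_error = error
        u = current_allocation + delta
        return min(self.hi, max(self.lo, u))     # clamp to a valid allocation

controller = IncrementalPi()
allocation, demand = 0.5, 0.7                    # fractions of one core
for _ in range(10):
    usage = min(demand, allocation)              # toy plant: usage saturates at the cap
    allocation = controller.step(reference=0.65, measured=usage,
                                 current_allocation=allocation)
print(round(allocation, 2))                      # approaches the 0.65 utilisation target
```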

10.
Virtualization provides a vehicle to manage the available resources and enhance their utilization in network computing. System dynamics requires that virtual machines be distributed and reconfigurable. To construct reconfigurable distributed virtual machines, service migration moves the runtime services among physical servers when necessary. By incorporating mobile agent technology, distributed virtual machines can improve their resource utilization and service availability significantly. This paper focuses on finding the optimal migration policies for service and agent migrations to achieve high throughput in reconfigurable distributed virtual machines. We analyze three issues of this decision problem: migration candidate determination, migration timing, and destination server selection. The service migration timing and destination server selection are formulated as two optimization models. We derive the optimal migration policy for distributed and heterogeneous systems based on stochastic optimization theory. Renewal processes are applied to model the dynamics of migration. We solve the agent migration problem by dynamic programming and extend the optimal service migration decision by considering the interplay of the hybrid mobility. We verify the accuracy of our migration decision policy in simulations.

11.
Task scheduling in heterogeneous environments such as cloud data centers is considered an NP-complete problem. Efficient task scheduling balances the load on the virtual machines (VMs), thereby achieving effective resource utilization. Hence there is a need for a new scheduling framework that performs load balancing while considering multiple quality of service (QoS) metrics such as makespan, response time, execution time, and task priority. A multi-core Web server has difficulty maintaining dynamic balance when scheduling remote dynamic requests, so the traditional scheduling algorithm needs to be improved to enhance its practical effect. This article studies the multi-core Web server, focusing on its queuing model. On this basis, the drawbacks of the multi-core Web server's remote dynamic request scheduling algorithm are identified, and the traditional algorithm is improved according to a requirements analysis. The improvement not only overcomes the drawbacks of traditional algorithms, but also keeps the system threads carrying the same amount of work so that the server always remains in dynamic balance, thereby serving customer requests effectively.

12.
The ever-growing scale of cloud data centers has caused their energy consumption and costs to grow exponentially. Virtual machine placement is central to improving the quality of service and reducing the cost of cloud computing environments. To address the problems that traditional virtual machine placement algorithms consider only a single objective and that multi-objective optimization makes it hard to find an optimal solution, a multi-objective virtual machine placement model oriented towards energy consumption, resource utilization, and load balancing is proposed. The optimization model is solved with an improved ant colony algorithm, exploiting its positive pheromone feedback mechanism and heuristic ...
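The improved ant colony algorithm relies on pheromone positive feedback combined with heuristic information. A minimal sketch of the standard probabilistic host-selection rule an ant might apply is shown below; the packing-oriented heuristic and all parameters are illustrative assumptions, not the paper's multi-objective formulation:

```python
# Minimal sketch of the ant-colony selection rule: an ant places a VM on host j
# with probability proportional to pheromone^alpha * heuristic^beta. The
# heuristic (post-placement utilisation, favouring tighter packing) and all
# parameters are illustrative assumptions.
import random

def choose_host(vm_cpu, hosts, pheromone, alpha=1.0, beta=2.0):
    """hosts: list of (used_cpu, capacity); pheromone: per-host trail values."""
    scores = []
    for (used, cap), tau in zip(hosts, pheromone):
        if used + vm_cpu > cap:
            scores.append(0.0)                       # infeasible host
        else:
            eta = (used + vm_cpu) / cap              # prefer tighter packing
            scores.append((tau ** alpha) * (eta ** beta))
    total = sum(scores)
    if total == 0:
        raise ValueError("no feasible host for this VM")
    r, acc = random.random() * total, 0.0            # roulette-wheel selection
    for j, s in enumerate(scores):
        acc += s
        if r <= acc:
            return j
    return len(scores) - 1

random.seed(1)
hosts = [(0.5, 1.0), (0.1, 1.0), (0.9, 1.0)]
pheromone = [1.0, 1.0, 1.0]
print(choose_host(0.3, hosts, pheromone))   # host 0 is most likely (packs tightest)
```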

13.
Research on a scheduling model for IaaS public cloud platforms
This paper abstracts a service model for IaaS public cloud platforms and, based on queueing theory, analyzes and optimizes the platform's service mode, queue length, and the configuration of scheduling servers. On this basis, a scheduling model based on the demand vectors of the IaaS platform is proposed: according to the degree of match between the demand and the available resources, usable host machines are screened from the set of physical machines managed by the platform; if no qualifying host can be found in a single pass, the platform's scheduling algorithm reallocates physical resources in combination with virtual machine migration operations, maximizing the platform's resource utilization while guaranteeing its availability. The algorithm was applied on a self-developed cloud computing platform, and the experimental results verify its feasibility.
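The demand-vector matching idea can be sketched as follows: a host qualifies only if every free resource covers the demand, and qualifying hosts are ranked by a matching degree. Cosine similarity is used here purely as an assumed example metric; the paper's actual matching-degree definition may differ:

```python
# Minimal demand-vector matching sketch: a host is usable only if every free
# resource covers the demand, and usable hosts are ranked by how closely their
# free-resource vector matches the demand vector.
import math

def matching_degree(demand, free):
    """Cosine similarity between the demand and free-resource vectors (assumed metric)."""
    dot = sum(d * f for d, f in zip(demand, free))
    return dot / (math.sqrt(sum(d * d for d in demand)) *
                  math.sqrt(sum(f * f for f in free)))

def select_host(demand, hosts):
    """hosts: dict name -> free-resource vector (cpu cores, memory GB, disk GB)."""
    usable = {n: f for n, f in hosts.items()
              if all(fi >= di for fi, di in zip(f, demand))}
    if not usable:
        return None      # the platform would then try migration-based reallocation
    return max(usable, key=lambda n: matching_degree(demand, usable[n]))

hosts = {"pm1": (8, 32, 500), "pm2": (2, 8, 100), "pm3": (16, 16, 200)}
print(select_host((4, 16, 200), hosts))   # pm2 lacks capacity; the best-matching host wins
```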

14.
Grouping and sequencing PCB assembly jobs with minimum feeder setups
Printed circuit boards are manufactured in automated assembly lines, where high-speed placement machines put components on the boards. These machines are frequently bottlenecks in production. This is especially true in high-mix, low-volume environments, where time-consuming setup operations of the machines have to be done repeatedly. Therefore, one can improve productivity with a proper setup strategy. The most popular strategies are the so-called group setup strategy and minimum setup strategy. In this paper, we consider the machine setup problem as a combination of a job grouping problem and a minimum setup problem. We formulate this joint problem as an Integer Programming model, where the objective is to minimize the weighted sum of the number of setup occasions and the total number of component feeder changes. We also present and evaluate hybrid algorithms based on both grouping and minimum setup heuristics. The best results are achieved by a method which uses both these strategies simultaneously.
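Purely as an illustration of the grouping side of this joint problem (not the paper's integer-programming model or its hybrid heuristics), a greedy sketch can add jobs to a group while the union of required component feeders still fits the machine, so that each group corresponds to one setup occasion:

```python
# Illustrative greedy sketch for the job-grouping side of the problem: add jobs
# to the current group while the union of required component feeders still fits
# the machine's feeder capacity; a full group triggers a new setup occasion.

def group_jobs(jobs, feeder_capacity):
    """jobs: dict name -> set of required component types."""
    groups, current, feeders = [], [], set()
    # Process larger jobs first so they seed groups (a common greedy choice).
    for name, comps in sorted(jobs.items(), key=lambda kv: -len(kv[1])):
        if len(feeders | comps) <= feeder_capacity:
            current.append(name)
            feeders |= comps
        else:
            groups.append((current, feeders))
            current, feeders = [name], set(comps)
    groups.append((current, feeders))
    return groups

jobs = {"A": {1, 2, 3}, "B": {2, 3, 4}, "C": {7, 8}, "D": {1, 4}, "E": {8, 9}}
for members, feeders in group_jobs(jobs, feeder_capacity=5):
    print(members, sorted(feeders))
# Each printed group is one setup occasion; feeder changes occur between groups.
```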

15.
The virtual machine placement problem is one of the core resource-scheduling problems in cloud data centers, and it has an important impact on a data center's performance, resource utilization, and energy consumption. Targeting this problem, with the optimization objectives of reducing data center energy consumption, improving resource utilization, and guaranteeing quality of service (QoS), a virtual machine placement algorithm based on fuzzy membership degrees is proposed, drawing on the idea of fuzzy clustering. First, a new distance metric is proposed that combines the overload probability of physical hosts with the placement suitability between virtual machines and physical hosts; then, a fuzzy membership matrix describing the suitability between virtual machines and physical hosts is computed from a fuzzy membership function; finally, with the help of an energy-aware mechanism, a local search over the fuzzy membership matrix yields the optimal placement plan for the virtual machines to be migrated. Simulation results show that the proposed algorithm performs well in reducing the energy consumption of cloud data centers, improving resource utilization, and guaranteeing QoS.
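A minimal sketch of the fuzzy membership matrix idea: an assumed distance combining a resource-fit term with the host's overload probability is converted into membership degrees with the standard fuzzy-c-means style formula; the distance definition and the fuzzifier m = 2 are illustrative assumptions:

```python
# Minimal sketch of one row of a fuzzy membership matrix between a VM and hosts.
# The distance (resource fit + overload probability) and the fuzzifier m = 2 are
# illustrative assumptions, not the paper's exact definitions.

def distance(vm_cpu, host_free_cpu, overload_prob):
    fit = abs(host_free_cpu - vm_cpu)          # how far from a snug fit
    return fit + overload_prob                 # penalize overload-prone hosts

def membership_row(vm_cpu, hosts, m=2.0):
    """hosts: list of (free_cpu, overload_probability); memberships sum to 1."""
    dists = [max(distance(vm_cpu, f, p), 1e-9) for f, p in hosts]
    row = []
    for d_i in dists:
        denom = sum((d_i / d_j) ** (2.0 / (m - 1.0)) for d_j in dists)
        row.append(1.0 / denom)
    return row

hosts = [(0.30, 0.05), (0.60, 0.20), (0.35, 0.40)]
print([round(u, 3) for u in membership_row(0.25, hosts)])   # -> [0.932, 0.031, 0.037]
# The VM is placed (or short-listed for local search) on the host with the
# highest membership degree.
```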

16.
With the emergence of network functions virtualization (NFV), network functions are provided by virtual network functions (VNFs), which improves the flexibility, scalability, and cost efficiency of networks. However, an important challenge for NFV is how to effectively place VNFs at different network locations and chain them to steer traffic while minimizing energy consumption. In addition, given network quality-of-service requirements, improving the service acceptance rate is also crucial to network performance. To address these issues, this paper studies VNF placement and chaining (VNFPC) in NFV, maximizing the service acceptance rate while trading it off against energy consumption. An energy-efficient VNFPC method for NFV based on Actor-Critic deep reinforcement learning (DRL), called ACDRL-VNFPC, is designed. The method applies an adaptive sharing scheme that saves energy by sharing VNFs of the same type among multiple services and by sharing the same server among multiple VNFs. Experimental results show that the proposed algorithm effectively balances energy consumption against the service acceptance rate and also improves execution time. Compared with the baseline algorithms, ACDRL-VNFPC improves the service acceptance rate, energy consumption, and execution time by 2.39%, 14.93%, and 16.16%, respectively.

17.
刘开南 《计算机应用》2019,39(11):3333-3338
To save energy in cloud data centers, several virtual machine (VM) migration strategies based on greedy algorithms are proposed. These strategies divide the VM migration process into three steps: physical host state detection, VM selection, and VM placement, and apply greedy algorithms to optimize the VM selection and VM placement steps respectively. The three proposed migration strategies are: selection by minimum host utilization with placement by maximum host utilization (MinMax_Host_Utilization); selection by maximum host power usage with placement by minimum host power usage (MaxMin_Host_Power_Usage); and selection by minimum host computing capacity with placement by maximum host computing capacity (MinMax_Host_MIPS). Upper or lower thresholds are set for metrics such as host CPU utilization, host energy consumption, and host computing capacity; following the principle of the greedy algorithm, the VMs on hosts whose metrics exceed or fall below these thresholds are migrated. Test results obtained with CloudSim as the cloud data center simulation environment show that, compared with the static-threshold and median-absolute-deviation migration strategies already available in CloudSim, the greedy-algorithm-based migration strategies consume about 15% less total energy, perform about 60% fewer VM migrations, and lower the average SLA violation rate by about 5%.
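The three-step flow described above (host state detection, greedy VM selection, greedy VM placement) can be sketched as follows with simplified metrics; the threshold value and the selection/placement criteria are illustrative and do not reproduce the exact MinMax/MaxMin strategies or their CloudSim implementation:

```python
# Minimal sketch of the three-step greedy migration flow: (1) detect overloaded
# hosts against a utilization threshold, (2) greedily pick VMs to migrate off
# them, (3) greedily place each migrated VM on the fullest feasible host.
# Thresholds and metrics are simplified illustrations only.

UPPER = 0.8   # assumed upper CPU-utilization threshold

def utilization(host):
    return sum(host["vms"].values()) / host["capacity"]

def migrate(hosts):
    plan = []
    for name, h in hosts.items():
        # Step 1: host state detection.
        while utilization(h) > UPPER:
            # Step 2: greedy selection -- remove the smallest VM first so the
            # host drops below the threshold with minimal displaced load.
            vm, load = min(h["vms"].items(), key=lambda kv: kv[1])
            del h["vms"][vm]
            # Step 3: greedy placement -- fullest host that stays below the threshold.
            targets = [(n, t) for n, t in hosts.items()
                       if n != name and utilization(t) + load / t["capacity"] <= UPPER]
            if not targets:
                h["vms"][vm] = load            # nowhere to go; undo and stop
                break
            dest = max(targets, key=lambda nt: utilization(nt[1]))[0]
            hosts[dest]["vms"][vm] = load
            plan.append((vm, name, dest))
    return plan

hosts = {"h1": {"capacity": 10, "vms": {"a": 5, "b": 3, "c": 2}},
         "h2": {"capacity": 10, "vms": {"d": 2}},
         "h3": {"capacity": 10, "vms": {}}}
print(migrate(hosts))   # -> [('c', 'h1', 'h2')]
```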

18.
李小六  张曦煌 《计算机应用》2013,33(12):3586-3590
To address the resource management problem in cloud computing, an energy model for cloud data centers and four virtual machine placement algorithms are proposed. First, the load of the hosts on each rack is computed and classified according to a preset threshold; then a minimum-migration policy is used to select suitable virtual machines to migrate from the hosts and to accept new virtual machine allocation requests; each virtual machine is matched against the set of hosts and placed on the optimal host. Experimental results show that, compared with an existing energy-aware resource allocation method, the proposed approach improves the energy efficiency of hosts, network devices, and cooling systems by 2.4%, 18.5%, and 28.1% respectively, and improves overall energy efficiency by 14.5% on average.
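The minimum-migration selection step can be illustrated with a small sketch that picks as few VMs as possible to bring an over-threshold host back under its limit; choosing the largest VMs first is a common greedy approximation, and the threshold value is an assumption rather than the paper's setting:

```python
# Minimal sketch of a minimum-migration selection step: pick as few VMs as
# possible whose removal brings an over-threshold host back under the limit.
# Largest-first selection is a common greedy approximation of that goal; the
# threshold value is an illustrative assumption.

def select_vms_to_migrate(vms, capacity, threshold=0.8):
    """vms: dict name -> CPU demand. Returns the names to migrate (possibly empty)."""
    excess = sum(vms.values()) - threshold * capacity
    if excess <= 0:
        return []                         # host is not overloaded
    selected = []
    for name, demand in sorted(vms.items(), key=lambda kv: -kv[1]):
        if excess <= 0:
            break
        selected.append(name)
        excess -= demand
    return selected

print(select_vms_to_migrate({"a": 4, "b": 3, "c": 2}, capacity=10))
# -> ['a']: removing the single largest VM already brings the host to 50% load
```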

19.
In this paper, we propose a server architecture recommendation and automatic performance verification technology, which recommends and verifies an appropriate server architecture on Infrastructure as a Service (IaaS) clouds with bare metal servers, container-based virtual servers, and virtual machines. Recently, cloud services have spread, and providers offer not only virtual machines but also bare metal servers and container-based virtual servers. However, users need to design an appropriate server architecture for their requirements based on the performance of these three types of servers, and they need considerable technical knowledge to optimize their system performance. Therefore, we study a technology that satisfies users' performance requirements on these three types of IaaS cloud. Firstly, we measure the performance and start-up time of a bare metal server, Docker containers, and KVM (Kernel-based Virtual Machine) virtual machines on OpenStack while changing the number of virtual servers. Secondly, we propose a server architecture recommendation technology based on the measured quantitative data. The server architecture recommendation technology receives an abstract template of OpenStack Heat together with function and performance requirements, and then creates a concrete template with server specification information. Thirdly, we propose an automatic performance verification technology that automatically executes the necessary performance tests on the provisioned user environments according to the template. We implement the proposed technologies, confirm their performance, and show their effectiveness.
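A rule-of-thumb sketch of the recommendation step is given below. The decision rules only mirror the broad trade-offs implied here (bare metal for peak performance, containers for fast start-up, VMs for the strongest isolation) and are assumptions, not the paper's measured decision logic or its OpenStack Heat template processing:

```python
# Illustrative rule-of-thumb sketch for choosing a server type per component.
# The rules are assumptions distilled from general trade-offs, not the paper's
# measured recommendation logic or its Heat template handling.

def recommend_server_type(needs_max_performance: bool,
                          needs_fast_startup: bool,
                          needs_dedicated_kernel: bool) -> str:
    if needs_dedicated_kernel:
        return "virtual machine"        # own kernel / strongest isolation
    if needs_max_performance:
        return "bare metal"             # no virtualization overhead
    if needs_fast_startup:
        return "container"              # lightweight, fast to provision
    return "virtual machine"            # conservative default

print(recommend_server_type(False, True, False))   # -> container
```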

20.
In the mobile wireless computing environment of the future, a large number of users, equipped with low-powered palmtop machines, will query databases over wireless communication channels. Palmtop-based units will often be disconnected for prolonged periods of time due to battery power saving measures; palmtops will also frequently relocate between different cells and will connect to different data servers at different times. Caching of frequently accessed data items will be an important technique for reducing contention on the narrow-bandwidth wireless channel. However, cache invalidation strategies will be severely affected by the disconnection and mobility of the clients: the server may no longer know which clients are currently residing under its cell and which of them are currently on. We propose a taxonomy of different cache invalidation strategies and study the impact of clients' disconnection times on their performance. We study ways to further improve the efficiency of the invalidation techniques described. We also describe how our techniques can be implemented over different network environments.
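One classic member of this family of strategies is a periodic broadcast invalidation report. The sketch below is a generic illustration of that idea, not the paper's specific taxonomy: the server broadcasts the items updated within a recent window, and a client disconnected longer than the window drops its entire cache:

```python
# Minimal sketch of a broadcast invalidation-report strategy from this family of
# techniques: the server periodically broadcasts the ids of items updated in the
# last WINDOW seconds; a client disconnected longer than WINDOW cannot trust its
# cache and drops it entirely. Window length and data are illustrative.

WINDOW = 60.0   # seconds of update history covered by each report (assumed)

def build_report(update_log, now):
    """update_log: dict item_id -> last update timestamp."""
    return {"timestamp": now,
            "invalidated": {i for i, t in update_log.items() if now - t <= WINDOW}}

def apply_report(cache, last_report_seen, report):
    """cache: dict item_id -> value. Returns the cleaned cache."""
    if report["timestamp"] - last_report_seen > WINDOW:
        return {}                                    # disconnected too long: drop everything
    return {i: v for i, v in cache.items() if i not in report["invalidated"]}

server_log = {"x": 950.0, "y": 100.0, "z": 990.0}
report = build_report(server_log, now=1000.0)
print(apply_report({"x": 1, "y": 2, "z": 3}, last_report_seen=980.0, report=report))
# -> {'y': 2}: x and z were updated recently, y is still valid
```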
