Similar Documents
A total of 20 similar documents were found (search time: 31 ms).
1.
In a virtualized environment, multiple virtual machines (VMs) sharing the same physical host compete for resources, which may cause performance interference among the VMs and thus lead to VM performance degradation. This paper focuses on measuring the CPU, memory, I/O, and overall VM performance degradation caused by this interference, based on properties of the VMs' runtime environment. To this end, we adopt a Bayesian network (BN) as the framework for uncertainty representation and inference and construct a VM property-performance BN (VPBN) with hidden variables that represent the unobserved performance degradation of CPU, memory, and I/O, respectively. We then present a method to measure the performance degradation of VMs through probabilistic inference over the VPBN. Experimental results show the accuracy and efficiency of our method.  相似文献
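The paper's VPBN structure is not given in the abstract, but the core idea — inferring a hidden degradation variable from observable runtime properties — can be illustrated with a toy discrete Bayesian network and inference by direct enumeration. The variables, states, and probabilities below are illustrative assumptions, not values from the paper.

```python
# Toy Bayesian network: one hidden CPU-degradation variable with two
# observable runtime properties. All probabilities are made-up assumptions.

# Prior over the hidden degradation level.
p_deg = {"low": 0.7, "high": 0.3}

# P(observation = "high" | degradation) for two runtime properties.
p_cache_miss = {"low": 0.2, "high": 0.8}   # cache-miss rate
p_cpu_steal = {"low": 0.1, "high": 0.7}    # CPU steal time

def posterior_degradation(cache_miss_high: bool, cpu_steal_high: bool) -> dict:
    """Posterior P(degradation | observations) by direct enumeration."""
    unnorm = {}
    for d, prior in p_deg.items():
        like_miss = p_cache_miss[d] if cache_miss_high else 1 - p_cache_miss[d]
        like_steal = p_cpu_steal[d] if cpu_steal_high else 1 - p_cpu_steal[d]
        unnorm[d] = prior * like_miss * like_steal
    z = sum(unnorm.values())
    return {d: v / z for d, v in unnorm.items()}

if __name__ == "__main__":
    # Both symptoms observed: the posterior shifts strongly toward "high".
    print(posterior_degradation(cache_miss_high=True, cpu_steal_high=True))
```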

2.
Cloud computing has risen to prominence in information technology over the past few years, offering a range of remotely accessible services to cloud users. Maintaining the quality of service (QoS) of a cloud service provider is one of the most pressing research issues today. QoS encompasses several concerns, such as virtual machine (VM) allocation, optimization of response time and throughput, utilization of processing capability, and load balancing. A VM allocation policy governs how VMs are assigned to hosts across data centers. This paper presents a new VM allocation policy that distributes the load of VMs among hosts, improving the utilization of the hosts' processing capability as well as the makespan and throughput of the cloud system. The experimental results are obtained through trace-based simulation in CloudSim 3.0.3 and compared with existing VM allocation policies.  相似文献
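The abstract does not spell out the allocation rule, but a load-distributing policy of this kind is commonly implemented as a "least-loaded host first" assignment. The sketch below is a minimal, hypothetical version of such a policy (host capacities and VM demands are in abstract MIPS units), not the paper's actual algorithm.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Host:
    name: str
    capacity_mips: float          # total processing capacity
    allocated_mips: float = 0.0   # capacity already assigned to VMs

    @property
    def utilization(self) -> float:
        return self.allocated_mips / self.capacity_mips

def allocate(vm_mips: float, hosts: List[Host]) -> Optional[Host]:
    """Place the VM on the feasible host with the lowest current utilization."""
    feasible = [h for h in hosts
                if h.allocated_mips + vm_mips <= h.capacity_mips]
    if not feasible:
        return None  # no host can accommodate the VM
    target = min(feasible, key=lambda h: h.utilization)
    target.allocated_mips += vm_mips
    return target

hosts = [Host("h1", 10000), Host("h2", 8000), Host("h3", 12000)]
for demand in [2500, 4000, 1000, 3000, 2000]:
    chosen = allocate(demand, hosts)
    print(demand, "->", chosen.name if chosen else "rejected")
```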

3.
Technology providers heavily exploit edge-cloud data centers (ECDCs) to meet user demand, yet ECDCs are large energy consumers. To reduce the energy expenditure of ECDCs, task placement is one of the most prominent approaches for effectively allocating and consolidating such tasks onto physical machines (PMs). Such allocation must also consider objectives beyond power, including network-traffic effectiveness. In this study, we present a multi-objective virtual machine (VM) placement scheme (treating VMs as fog tasks) for ECDCs called TRACTOR, which utilizes an artificial bee colony optimization algorithm for power- and network-aware assignment of VMs onto PMs. The proposed scheme aims to minimize the network traffic of the interacting VMs and the power dissipation of the data center's switches and PMs. To evaluate the proposed VM placement solution, the Virtual Layer 2 (VL2) and three-tier network topologies are modeled and integrated into the CloudSim toolkit to assess its effectiveness in mitigating the network traffic and power consumption of the ECDC. Results indicate that the proposed method reduces energy consumption by 3.5% while decreasing network traffic and power by 15% and 30%, respectively, without affecting other QoS parameters.  相似文献
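The abstract does not give TRACTOR's formulation; the sketch below only illustrates the kind of combined power-plus-traffic objective such a placement scheme optimizes, and it uses a plain random local search in place of the artificial bee colony algorithm the paper employs. All capacities, power figures, traffic volumes, and the traffic weight are invented.

```python
import random

# Hypothetical inputs: VM CPU demands, PM capacities, idle/peak PM power,
# and pairwise traffic between VMs (arbitrary units).
vm_cpu = [20, 35, 10, 50, 25, 15]
pm_cap = [100, 100, 100]
P_IDLE, P_PEAK = 70.0, 250.0
traffic = {(0, 1): 5.0, (1, 3): 8.0, (2, 4): 3.0, (4, 5): 6.0}

def cost(placement):
    """Power of active PMs (linear in utilization) + traffic crossing PM boundaries."""
    load = [0.0] * len(pm_cap)
    for vm, pm in enumerate(placement):
        load[pm] += vm_cpu[vm]
    if any(l > c for l, c in zip(load, pm_cap)):
        return float("inf")  # infeasible placement
    power = sum(P_IDLE + (P_PEAK - P_IDLE) * l / c
                for l, c in zip(load, pm_cap) if l > 0)
    cross_traffic = sum(v for (a, b), v in traffic.items()
                        if placement[a] != placement[b])
    return power + 10.0 * cross_traffic  # traffic weight picked arbitrarily

def local_search(iters=2000):
    best = [random.randrange(len(pm_cap)) for _ in vm_cpu]
    best_cost = cost(best)
    for _ in range(iters):
        cand = best[:]
        cand[random.randrange(len(cand))] = random.randrange(len(pm_cap))
        c = cost(cand)
        if c < best_cost:
            best, best_cost = cand, c
    return best, best_cost

print(local_search())
```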

4.
The problem of efficient placement of virtual machines (VMs) in cloud computing infrastructure is well studied in the literature. VM placement decision involves selecting a physical machine in the data center to host a specific VM. This decision could play a pivotal role in yielding high efficiency for both the cloud and its users. Also, reallocation of VMs could be performed through migrations to achieve goals like higher server consolidation or power saving. VM placement and reallocation decisions may consider affinities such as memory sharing, CPU processing, disk sharing, and network bandwidth requirements between VMs defined in multiple dimensions. Considering the NP‐hard complexity associated with computing an optimal solution for this VM placement decision problem, existing research employs heuristic‐based techniques to compute an efficient solution. However, most of these approaches are restricted to only a single attribute at a time. That is, a given technique of using heuristics to compute VM placement considers only a single attribute, while completely ignoring the impact of other dimensions of placing VMs. While this approach may improve the efficiency with respect to the affinity attribute in consideration, it may yield degraded performance with respect to other affinities. In addition, the criteria for determining VM‐placement efficiency may vary for different applications. Hence, the overall goal of achieving VM placement efficiency becomes difficult and challenging. We are motivated by this challenging problem of efficient VM placement and propose policy‐aware virtual machine management (PAVM), a generic framework that can be used for efficient VM management in a cloud computing platform based on the service provider‐defined policies to achieve the desired system‐wide goals. This involves efficient means to profile different VM affinities and to use profiled information effectively by intelligent and efficient VM migrations at run time considering multiple attributes at a time. By conducting extensive evaluation through simulation and real experiments that involve VM affinities on the basis of network and memory, we confirmed that the PAVM architecture is capable of improving the efficiency of a cloud system. We elaborate the architecture of a PAVM system, describe its implementation, and present details of our experiments. Copyright © 2016 John Wiley & Sons, Ltd.  相似文献   

5.
Nowadays, the consolidation of application servers is the most common use of current virtualization solutions. Each application server takes the form of a virtual machine (VM) that can be hosted on a physical machine. In a default Xen implementation, the scheduler is configured to treat all of the VMs that run on a single machine equally. As a consequence, the scheduler shares all of the available physical CPU resources equally among the running VMs. However, when the applications that run in the VMs dynamically change their resource requirements, a different solution is needed. Furthermore, if resource usage is tied to service-level agreements, a predefined equal share of the processor power is insufficient for the VMs. Although Xen's primitives make it possible to tune the scheduler parameters, there is no tool for dynamically changing the share of processor power assigned to each VM. A combination of several primitives, however, appears well suited as a basis for achieving this. In this paper, we present an approach to efficiently manage the quality of service (QoS) of virtualized resources in multicore machines. We evaluate different alternatives within Xen for building an enhanced management of virtual CPU resources. We compare these alternatives in terms of performance, flexibility, and ease of use. We devise an architecture to build a high-level service that combines interdomain communication mechanisms with monitoring and control primitives for local resource management. We achieve this with our solution, a local resource manager (LRM), which adjusts the resources needed by each VM according to an agreed QoS. The LRM has been implemented as a prototype and deployed on Xen-virtualized machines. By means of experiments, we show that the implemented management component can meet the service-level objectives even under dynamic conditions by adapting the resources assigned to the virtualized machines according to demand. With the LRM, we therefore achieve both fine-grained resource allocation and efficient assignment.  相似文献
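A minimal control-loop sketch of the kind of adjustment an LRM-style manager performs, assuming Xen's xl toolstack and its sched-credit command (with the -d and -c options) for capping a domain's CPU share; the utilization probe is deliberately left unimplemented, since a real manager would read it from xentop or libxenstat. This is an assumed illustration, not the paper's LRM implementation.

```python
import subprocess
import time

def set_credit_cap(domain: str, cap_percent: int) -> None:
    """Cap a domain's CPU share via Xen's credit scheduler (xl sched-credit)."""
    subprocess.run(["xl", "sched-credit", "-d", domain, "-c", str(cap_percent)],
                   check=True)

def get_cpu_utilization(domain: str) -> float:
    """Placeholder: return the domain's recent CPU utilization in percent.
    A real LRM would derive this from xentop or libxenstat."""
    raise NotImplementedError

def control_loop(domain: str, slo_util: float, cap: int = 50,
                 step: int = 10, period_s: float = 5.0) -> None:
    """Grow or shrink the CPU cap so the domain stays near its agreed share."""
    while True:
        util = get_cpu_utilization(domain)
        if util > slo_util and cap < 100:
            cap = min(100, cap + step)      # give the VM more headroom
        elif util < 0.5 * slo_util and cap > step:
            cap = max(step, cap - step)     # reclaim unused share
        set_credit_cap(domain, cap)
        time.sleep(period_s)
```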

6.
廖剑伟  陈善雄  李莉 《通信学报》2012,33(Z1):157-164
This paper quantitatively analyzes the differing portions of memory-block data across different applications, i.e., the proportion by which the memory page blocks effectively modified in a given phase differ from the other unchanged memory blocks, and proposes a virtual machine synchronization mechanism based on these differing portions of memory data. On the primary VM side, a hash table keyed on address and content is used to find the non-dirty page block that best matches each dirty memory page block; the differing data is XOR-compressed and sent over the network to the backup VM. The backup VM decodes the received synchronization data and reassembles the dirty memory pages of the primary VM, thereby completing the state synchronization of the backup VM. Experimental results show that, compared with the conventional standard asynchronous approach, the memory-block-difference-based VM synchronization mechanism reduces the volume of network traffic caused by synchronization by roughly 80% and considerably improves system performance for some benchmark programs.  相似文献
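A minimal sketch of the core idea: match each dirty page against a reference (non-dirty) page via a content hash, then transmit a compressed XOR diff. The page size, the prefix-based lookup key, and the zlib compression used here are assumptions for illustration, not the paper's exact encoding.

```python
import hashlib
import zlib

PAGE_SIZE = 4096  # assumed page/block size

def page_signature(prefix: bytes) -> str:
    """Content hash used to look up a best-matching non-dirty page."""
    return hashlib.sha1(prefix).hexdigest()[:16]

def xor_diff(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length pages."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode_dirty_page(dirty: bytes, non_dirty_index: dict) -> tuple:
    """Primary side: return (reference_key, compressed_xor_payload) for transmission."""
    key = page_signature(dirty[:64])          # coarse prefix-based lookup key
    reference = non_dirty_index.get(key, bytes(PAGE_SIZE))
    return key, zlib.compress(xor_diff(dirty, reference))

def decode_dirty_page(key: str, payload: bytes, non_dirty_index: dict) -> bytes:
    """Backup side: rebuild the dirty page from the reference and the XOR diff."""
    reference = non_dirty_index.get(key, bytes(PAGE_SIZE))
    return xor_diff(zlib.decompress(payload), reference)

# Round-trip demo with one unchanged page acting as the reference.
ref_page = bytes([7]) * PAGE_SIZE
index = {page_signature(ref_page[:64]): ref_page}
dirty_page = bytearray(ref_page)
dirty_page[100:110] = b"0123456789"
k, blob = encode_dirty_page(bytes(dirty_page), index)
assert decode_dirty_page(k, blob, index) == bytes(dirty_page)
print("reconstructed OK, payload bytes:", len(blob))
```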

7.
In IaaS clouds, different mappings between virtual machines (VMs) and physical machines (PMs) lead to different resource utilization, so how to place VMs on PMs to reduce energy consumption is becoming one of the major concerns for cloud providers. Existing VM scheduling schemes optimize the utilization of PMs or of network resources, but few attempt to improve the energy efficiency of both kinds of resources simultaneously. This paper proposes a VM scheduling scheme that meets multiple resource constraints, such as physical server capacity (CPU, memory, storage, bandwidth, etc.) and network link capacity, to reduce both the number of active PMs and the number of active network elements, and thereby the overall energy consumption. The VM scheduling problem is abstracted as a combination of a bin packing problem and a quadratic assignment problem, a classic combinatorial optimization problem that is NP-hard. Accordingly, we design a two-stage heuristic algorithm to solve it, and simulations show that our solution outperforms existing PM-only or network-only optimization solutions.  相似文献
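The bin-packing side of such a scheme is typically handled by a packing heuristic; the sketch below shows a first-fit-decreasing packing of VMs onto PMs under two resource dimensions (CPU and memory). It is a generic illustration of the bin-packing stage, not the paper's two-stage algorithm, and all capacities are invented.

```python
from typing import Dict, List, Tuple

# (cpu, mem) demands of VMs and identical PM capacity, in arbitrary units.
vms: Dict[str, Tuple[int, int]] = {
    "vm1": (40, 32), "vm2": (20, 8), "vm3": (35, 16),
    "vm4": (10, 4),  "vm5": (25, 24),
}
PM_CAP = (64, 64)

def first_fit_decreasing(vms: Dict[str, Tuple[int, int]]) -> List[dict]:
    """Pack VMs (largest CPU demand first) into as few PMs as possible."""
    pms: List[dict] = []  # each PM: {"free": [cpu, mem], "vms": [...]}
    for name, (cpu, mem) in sorted(vms.items(), key=lambda kv: -kv[1][0]):
        for pm in pms:
            if pm["free"][0] >= cpu and pm["free"][1] >= mem:
                pm["free"][0] -= cpu
                pm["free"][1] -= mem
                pm["vms"].append(name)
                break
        else:  # no existing PM fits: power on a new one
            pms.append({"free": [PM_CAP[0] - cpu, PM_CAP[1] - mem], "vms": [name]})
    return pms

for i, pm in enumerate(first_fit_decreasing(vms)):
    print(f"PM{i}: {pm['vms']} free={pm['free']}")
```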

8.
Cloud computing has drastically reduced the price of computing resources through the use of virtualized resources that are shared among users. However, the established large cloud data centers have a large carbon footprint owing to their excessive power consumption. Inefficiency in resource utilization and power consumption results in the low fiscal gain of service providers. Therefore, data centers should adopt an effective resource-management approach. In this paper, we present a novel load-balancing framework with the objective of minimizing the operational cost of data centers through improved resource utilization. The framework utilizes a modified genetic algorithm for realizing the optimal allocation of virtual machines (VMs) over physical machines. The experimental results demonstrate that the proposed framework improves the resource utilization by up to 45.21%, 84.49%, 119.93%, and 113.96% over a recent and three other standard heuristics-based VM placement approaches.  相似文献   
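The abstract describes a modified genetic algorithm for VM-to-PM allocation without giving its operators; the sketch below is a bare-bones GA over placement vectors (tournament selection, one-point crossover, random-reset mutation, simple elitism) that minimizes the number of active PMs subject to CPU capacity. The parameters and fitness choice are illustrative assumptions, not the paper's modified GA.

```python
import random

VM_CPU = [20, 35, 10, 50, 25, 15, 30, 40]   # hypothetical VM demands
PM_CAP, NUM_PM = 100, 5

def fitness(ind):
    """Lower is better: number of active PMs, with a heavy penalty for overloads."""
    load = [0] * NUM_PM
    for vm, pm in enumerate(ind):
        load[pm] += VM_CPU[vm]
    overload = sum(max(0, l - PM_CAP) for l in load)
    active = sum(1 for l in load if l > 0)
    return active + 100 * overload

def tournament(pop, k=3):
    return min(random.sample(pop, k), key=fitness)

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.1):
    return [random.randrange(NUM_PM) if random.random() < rate else g for g in ind]

def genetic_vm_placement(pop_size=40, generations=200):
    pop = [[random.randrange(NUM_PM) for _ in VM_CPU] for _ in range(pop_size)]
    for _ in range(generations):
        elite = min(pop, key=fitness)        # keep the best individual
        children = [mutate(crossover(tournament(pop), tournament(pop)))
                    for _ in range(pop_size - 1)]
        pop = [elite] + children
    return min(pop, key=fitness)

best = genetic_vm_placement()
print("placement:", best, "fitness:", fitness(best))
```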

9.
Scalability, reliability, and flexibility of cloud computing services are essential to meet the growing demand for computational power. To sustain scalability, a proper virtual machine migration (VMM) approach is needed that balances quality of service against service-level agreement violations. In this paper, a novel VMM algorithm based on Lion-Whale optimization is developed by integrating the Lion optimization algorithm and the Whale optimization algorithm. The Lion-Whale VMM performs optimal virtual machine (VM) migration based on a new fitness function that accounts for the resource use, migration cost, and energy consumption of VM placement. The proposed VM migration strategy is evaluated over four cloud setups with different configurations, simulated using the CloudSim toolkit. Its performance is validated against existing optimization-based VMM algorithms, such as particle swarm optimization and the genetic algorithm, using measures such as energy consumption, migration cost, and resource use. Simulation results reveal that the proposed Lion-Whale VMM outperforms the existing approaches in optimal VM placement for cloud computing environments, with a reduced migration cost of 0.01, maximal resource use of 0.36, and minimal energy consumption of 0.09.  相似文献
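The abstract names the three terms of the fitness function but not their exact form; the weighted-sum sketch below is one plausible (assumed) way to combine normalized resource use, migration cost, and energy consumption when scoring a candidate migration plan, not the paper's actual fitness function.

```python
def migration_fitness(resource_use: float, migration_cost: float,
                      energy: float, weights=(0.4, 0.3, 0.3)) -> float:
    """Score a candidate migration plan; all inputs are assumed normalized to
    [0, 1]. Higher resource use is better, so it enters negatively; lower
    migration cost and energy are better. Lower fitness = better plan."""
    w_use, w_cost, w_energy = weights
    return w_use * (1.0 - resource_use) + w_cost * migration_cost + w_energy * energy

# Illustrative comparison of two candidate plans under this weighting.
plans = {
    "plan_a": (0.36, 0.01, 0.09),
    "plan_b": (0.28, 0.05, 0.12),
}
best = min(plans, key=lambda p: migration_fitness(*plans[p]))
print("preferred plan:", best)
```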

10.

In cloud computing, cloud assets are often underutilized because of the poor allocation of tasks to virtual machines (VMs), and several conflicting factors affect the scheduling of tasks to VMs. In this paper, an effective scheduling approach with multi-objective VM selection in cloud data centers is proposed. The proposed multi-objective VM selection and optimized scheduling works as follows. Initially, the input tasks are gathered in a task queue, and each task's computation time and trust parameters are measured in the task manager. The tasks are then prioritized based on the computed measures. Finally, the tasks are scheduled to the VMs in the host manager. Multiple objectives are considered for VM selection: power usage, load volume, and resource wastage are evaluated for the VMs, entropy is calculated over the measured objectives, and based on the entropy values a krill herd optimization algorithm schedules the prioritized tasks to the VMs. The experimental results show that the proposed entropy-based krill herd optimization scheduling outperforms the existing general krill herd optimization, cuckoo search optimization, cloud list scheduling, minimum completion cloud, cloud task partitioning scheduling, and round-robin techniques.  相似文献
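The entropy step in such a scheme usually follows the standard entropy weight method: normalize each objective column, compute its information entropy, and give higher weight to objectives with more dispersion. The sketch below implements that textbook method on made-up per-VM objective values; it is not the paper's exact scoring.

```python
import math

# Rows = candidate VMs, columns = (power usage, load volume, resource wastage),
# all treated as costs (lower is better). Values are invented.
matrix = [
    [0.62, 0.40, 0.10],
    [0.55, 0.70, 0.25],
    [0.80, 0.30, 0.05],
    [0.45, 0.60, 0.30],
]

def entropy_weights(matrix):
    """Standard entropy weight method: more dispersed objectives get more weight."""
    n_rows, n_cols = len(matrix), len(matrix[0])
    weights = []
    for j in range(n_cols):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [x / total for x in col]
        entropy = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(n_rows)
        weights.append(1.0 - entropy)          # degree of diversification
    s = sum(weights)
    return [w / s for w in weights]

w = entropy_weights(matrix)
# Lower weighted score = preferable VM when all objectives are costs.
scores = [sum(wi * x for wi, x in zip(w, row)) for row in matrix]
print("weights:", [round(x, 3) for x in w])
print("best VM index:", scores.index(min(scores)))
```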

11.
In recent years, the increasing use of cloud services has driven the growth and importance of cloud data centers. One of the challenging issues in cloud environments is the high energy consumption of data centers, which has been ignored in the corporate competition to develop them. The most important problems of operating large cloud data centers are high energy costs and greenhouse gas emissions, so researchers are now striving to find effective approaches to decreasing energy consumption in cloud data centers. One of the preferred techniques for reducing energy consumption is virtual machine (VM) placement. In this paper, we present a VM allocation algorithm that reduces energy consumption and Service Level Agreement Violation (SLAV). The proposed algorithm is based on the best-fit decreasing algorithm and uses learning automata theory, correlation coefficients, and an ensemble prediction algorithm to make better VM allocation decisions. The experimental results indicated improvements in energy consumption and SLAV compared with well-known baseline VM allocation algorithms.  相似文献
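The paper's algorithm builds on best-fit decreasing; a common power-aware variant (not the paper's full method with learning automata and prediction) places each VM on the host whose estimated power draw increases the least. The sketch below shows that baseline with an assumed linear power model and invented host data.

```python
# Power-aware best-fit decreasing: assign each VM (largest CPU demand first)
# to the host with the smallest estimated increase in power consumption.
HOSTS = [
    {"cap": 100.0, "used": 30.0, "p_idle": 70.0, "p_peak": 250.0},
    {"cap": 100.0, "used": 10.0, "p_idle": 60.0, "p_peak": 200.0},
    {"cap": 100.0, "used":  0.0, "p_idle": 65.0, "p_peak": 220.0},
]
VM_DEMANDS = [25.0, 40.0, 15.0, 10.0]

def power(host, used):
    """Linear power model; an unused host is assumed to be powered off."""
    if used == 0:
        return 0.0
    return host["p_idle"] + (host["p_peak"] - host["p_idle"]) * used / host["cap"]

def power_aware_bfd(vms, hosts):
    placement = {}
    for i, demand in sorted(enumerate(vms), key=lambda kv: -kv[1]):
        best, best_delta = None, float("inf")
        for h in hosts:
            if h["used"] + demand > h["cap"]:
                continue
            delta = power(h, h["used"] + demand) - power(h, h["used"])
            if delta < best_delta:
                best, best_delta = h, delta
        if best is None:
            raise RuntimeError(f"no host can fit VM {i}")
        best["used"] += demand
        placement[i] = hosts.index(best)
    return placement

print(power_aware_bfd(VM_DEMANDS, HOSTS))
```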

12.
One of the key technologies in cloud computing is virtualization. Using virtualization, a system can optimize resource usage, simplify the management of infrastructure and software, and reduce hardware requirements. This research focuses on infrastructure as a service and resource allocation by providers for consumers, and explores the optimization of system utilization based on actual service traces of a real-world cloud computing site. Before activating additional virtual machines (VMs) for applications, the system examines CPU usage in the resource pools. The behavior of each VM can be estimated by monitoring the CPU usage for different types of services, and consequently additional resources can be added or idle resources released. Based on historical observations of the resources required for each kind of service, the system can dispatch VMs efficiently. The proposed scheme can efficiently and effectively distribute resources to VMs, maximizing the utilization of the cloud computing center. Copyright © 2013 John Wiley & Sons, Ltd.  相似文献

13.
Cloud computing introduced a new paradigm to the IT industry by providing on-demand, elastic, ubiquitous computing resources to users. In a virtualized cloud data center, a large number of physical machines (PMs) host different types of virtual machines (VMs). Unfortunately, cloud data centers do not fully utilize their computing resources, causing a considerable amount of energy waste that carries a high operational cost and has a dramatic impact on the environment. Server consolidation is one of the techniques that provide efficient use of physical resources by reducing the number of active servers. Since VM placement plays an important role in server consolidation, one of the main challenges in cloud data centers is an efficient mapping of VMs to PMs. Multiobjective VM placement is generating considerable interest among researchers. This paper presents a detailed review of recent state-of-the-art multiobjective VM placement mechanisms that use nature-inspired metaheuristic algorithms in cloud environments. It also gives special attention to the parameters and approaches used for placing VMs onto PMs. Finally, we discuss further work that can be done in this area of research.  相似文献

14.
Next-generation video surveillance systems are expected to face challenges in providing timely and scalable computation support for an unprecedented number of video streams from multiple cameras. Cloud computing offers huge computation resources for large-scale storage and processing on demand, which makes it well suited to video surveillance tasks. The cloud also provides QoS-guaranteed hardware and software solutions through virtual machine (VM) technology with a utility-like service costing model. In a cloud-based video surveillance context, the resource requests for handling surveillance tasks are expressed as VM resource requests, which in turn are mapped to VM resource allocations on the physical servers hosting the VMs. Due to the nature of video surveillance tasks, these requests are highly time-constrained, heterogeneous, and dynamic. Hence, it is very challenging to manage cloud resources at the level of VM resource allocation given the stringent requirements of video surveillance tasks. This paper proposes a computation model to efficiently manage cloud resources for surveillance task allocation. The proposed model optimizes the trade-off between average service waiting time and long-term service cost, and shows that long-term service cost is inversely proportional to high and balanced utilization of cloud resources. Experiments show that our approach provides a near-optimal solution for cloud resource management when handling heterogeneous and unpredictable video surveillance tasks dynamically over next-generation networks.  相似文献

15.
As the number of virtual machines (VMs) consolidated on a single physical server increases with the rapid advance of server hardware, the virtual network becomes complex and fragile. Modern Network Security Engines (NSEs) are introduced to eradicate intrusions occurring in the virtual network. In this paper, we point out the inadequacy of the present live migration implementation, which prevents it from providing transparent VM relocation between hypervisors equipped with Network Security Engines (NSE-H). This occurs because the current implementation ignores the VM-related Security Context (SC) required by the NSEs embedded in NSE-H. We present CoM, a comprehensive live migration framework for NSE-H-based virtualization computing environments. We built a prototype system on Xen hypervisors to evaluate our framework and conducted experiments under various realistic application environments. The results demonstrate that our solution successfully fixes the inadequacy of the present live migration implementation, with negligible performance overhead.  相似文献

16.
With the daily increase in the number of cloud users and the volume of submitted workloads, load balancing (LB) over clouds, and the consequent reduction in users' response time, is emerging as a vital issue. To successfully address the LB problem, we optimize workload distribution among virtual machines (VMs). This approach consists of two parts: first, a meta-heuristic method based on biogeographical optimization for dispatching workloads among VMs is introduced; second, we propose a heuristic algorithm inspired by the Banker's algorithm that runs in the core scheduler to control and avoid VM overloads. The combination of these two (meta-)heuristic algorithms constitutes an LB approach through which we are able to reduce the makespan to a reasonable time frame. Moreover, an information base repository (IBR) is introduced to maintain the online processing status of physical machines (PMs) and VMs; data stored in the IBR are retrieved when needed. This approach is compared with well-known (non-)evolutionary approaches, such as round-robin, max-min, MGGS, and TBSLB-PSO. Experimental results reveal that our proposed approach outperforms its counterparts in a heterogeneous environment when resources are scarcer than workloads, while the utilization of physical resources gradually increases. Therefore, optimal workload scheduling, together with the absence of overloads, results in a reduction in makespan.  相似文献
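The overload-avoidance heuristic is described as inspired by the Banker's algorithm; below is the classical Banker's safety check itself, recast with VMs as processes and host resources (e.g., vCPU and memory) as the resource vector. It illustrates only the underlying idea; the paper's actual heuristic is not reproduced here, and all numbers are invented.

```python
def is_safe_state(available, allocation, maximum):
    """Classical Banker's algorithm safety check.
    available: free units of each resource on the host.
    allocation[i]: units currently held by VM i.
    maximum[i]: maximum units VM i may ever request."""
    n_vms = len(allocation)
    need = [[m - a for m, a in zip(maximum[i], allocation[i])] for i in range(n_vms)]
    work = list(available)
    finished = [False] * n_vms
    progressed = True
    while progressed:
        progressed = False
        for i in range(n_vms):
            if not finished[i] and all(n <= w for n, w in zip(need[i], work)):
                # VM i could run to completion and release its resources.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progressed = True
    return all(finished)

# Two resource types (e.g., vCPU and GiB of RAM) on one host.
available = [3, 3]
allocation = [[0, 1], [2, 0], [3, 0]]
maximum    = [[7, 3], [3, 2], [6, 0]]
print(is_safe_state(available, allocation, maximum))  # True for this instance
```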

17.
Virtual machine (VM) migration enables flexible and efficient resource management in modern data centers. Although various VM migration algorithms have been proposed to improve the utilization of physical resources in data centers, they generally focus on how to select VMs to be migrated only according to their resource requirements and ignore the relationship between the VMs and servers with respect to their varying resource usage as well as the time at which the VMs should be migrated. This may dramatically degrade the algorithm performance and increase the operating and the capital cost when the resource requirements of the VMs change dynamically over time. In this paper, we propose an integrated VM migration strategy to jointly consider and address these issues. First, we establish a service level agreement-based soft migration mechanism to significantly reduce the number of VM migrations. Then, we develop two algorithms to solve the VM and server selection issues, in which the correlation between the VMs and the servers is used to identify the appropriate VMs to be migrated and the destination servers for them. The experimental results obtained from extensive simulations show the effectiveness of the proposed algorithms compared to traditional schemes in terms of the rate of resource usage, the operating cost and the capital cost.  相似文献   
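The correlation idea can be illustrated with a Pearson correlation between each VM's CPU-usage history and its host's aggregate usage: a VM whose load is strongly correlated with the host's load contributes most to its peaks and is a natural migration candidate. The time series and the selection rule below are illustrative assumptions, not the paper's algorithms.

```python
from statistics import correlation  # requires Python 3.10+

# Hypothetical CPU-usage histories (percent) over the same time window.
host_usage = [55, 60, 72, 81, 78, 90, 86, 70]
vm_usage = {
    "vm_a": [10, 12, 20, 27, 25, 33, 30, 18],   # tracks the host's peaks
    "vm_b": [20, 19, 21, 20, 22, 21, 20, 21],   # flat, weakly correlated
    "vm_c": [15, 18, 14, 16, 15, 17, 16, 15],
}

def pick_vm_to_migrate(host_usage, vm_usage):
    """Choose the VM whose usage correlates most strongly with the host's."""
    scores = {vm: correlation(series, host_usage) for vm, series in vm_usage.items()}
    return max(scores, key=scores.get), scores

vm, scores = pick_vm_to_migrate(host_usage, vm_usage)
print("migrate:", vm, {k: round(v, 2) for k, v in scores.items()})
```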

18.
This paper focuses on providing a secure and trustworthy solution for virtual machine (VM) migration within an existing cloud provider domain and/or to other federated cloud providers. It mainly addresses the infrastructure-as-a-service (IaaS) cloud service model, extending and complementing previous Trusted Computing techniques for secure VM launch to the VM migration case. The VM migration solution proposed in this paper uses a Trust_Token-based approach to guarantee that user VMs can only be migrated to and hosted on trustworthy and/or compliant cloud platforms. The ability to also check the compliance of the cloud platforms with pre-defined baseline configurations makes our solution compatible with existing, widely accepted, standards-based, security-focused cloud frameworks such as FedRAMP. Our proposed solution can be used for both inter- and intra-cloud VM migrations. Unlike previous schemes, our solution does not depend on an active (online) trusted third party; that is, the trusted third party only performs platform certification and is not involved in the actual VM migration process. We use the Tamarin prover to carry out a formal security analysis of the proposed migration protocol and show that our protocol is safe under the Dolev-Yao intruder model. Finally, we show how our proposed mechanisms fulfill the major security and trust requirements for secure VM migration in cloud environments.  相似文献

19.
Efficient task scheduling is a promising way to achieve better resource utilization in cloud computing. Various task scheduling approaches using optimization and decision-making techniques have been proposed, but they ignore scheduling conflicts among similar tasks, and such conflicts often cause tasks to miss their deadlines. This work studies the use of MCDM (multi-criteria decision-making) techniques in the backfilling algorithm to execute deadline-based tasks in cloud computing. In the backfilling approach, some tasks are selected as backfill tasks, whose role is to provide idle resources to other tasks. Selecting the backfill task is challenging when there are similar tasks, since this creates conflicts in the scheduling. In cloud computing, deadline-based tasks have multiple parameters such as arrival time, number of VMs (virtual machines), start time, execution duration, and deadline. In this work, we formulate the deadline-based task scheduling problem as an MCDM problem and discuss the MCDM techniques AHP (Analytical Hierarchy Process), VIKOR (VIseKriterijumska Optimizacija I Kompromisno Resenje), and TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) for avoiding scheduling conflicts among similar tasks. We simulate the backfilling algorithm together with the three MCDM mechanisms and study its performance on synthetic workloads. The mechanism yields efficient VM allocation and utilization for deadline-based tasks in the cloud environment.  相似文献
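Of the three MCDM techniques named, TOPSIS is the most mechanical to show: score each candidate backfill task by its distance to the ideal and anti-ideal alternatives. The criteria, weights, and task values below are invented for illustration and do not come from the paper.

```python
import math

# Candidate backfill tasks; criteria: (slack to deadline, requested VMs, runtime).
# Larger slack is a benefit; fewer VMs and shorter runtime are costs.
tasks = {
    "t1": [120.0, 4.0, 30.0],
    "t2": [ 60.0, 2.0, 45.0],
    "t3": [200.0, 6.0, 20.0],
}
weights = [0.5, 0.25, 0.25]
benefit = [True, False, False]

def topsis(tasks, weights, benefit):
    names = list(tasks)
    cols = list(zip(*tasks.values()))
    # Vector-normalize each criterion column, then apply the weights.
    norms = [math.sqrt(sum(x * x for x in col)) for col in cols]
    weighted = {n: [w * x / nr for x, w, nr in zip(tasks[n], weights, norms)]
                for n in names}
    ideal = [max(col) if b else min(col)
             for col, b in zip(zip(*weighted.values()), benefit)]
    anti = [min(col) if b else max(col)
            for col, b in zip(zip(*weighted.values()), benefit)]
    scores = {}
    for n in names:
        d_pos = math.dist(weighted[n], ideal)
        d_neg = math.dist(weighted[n], anti)
        scores[n] = d_neg / (d_pos + d_neg)   # closer to 1 = better candidate
    return scores

scores = topsis(tasks, weights, benefit)
print(max(scores, key=scores.get), scores)
```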

20.
To address the high overhead of data exchange and the inflexibility of bandwidth allocation among multiple virtual machines on the same host computer, this paper proposes a hardware-supported method for inter-VM data exchange and dynamic bandwidth allocation, together with modeling and experiments. The method adopts the idea of I/O virtualization and improves and optimizes the hardware architecture of the Ethernet controller. By parsing the data sent by a VM and extending the transmit engine's access to the receive BD (buffer descriptor) rings, data is switched directly from the transmit engine to the destination VM's receive queue; by collecting and analyzing statistics on the data in each VM's receive queue, the bandwidth of each VM is dynamically allocated and adjusted. A test platform was built around a self-developed Gigabit Ethernet controller prototype for the experiments. The results show that the proposed method not only reduces the CPU overhead of inter-VM data exchange and bandwidth allocation but also remains compatible with the Ethernet controller and the hypervisor.  相似文献
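The dynamic bandwidth adjustment described above can be approximated in software as reallocating a link's capacity in proportion to each VM's observed receive-queue demand, subject to a minimum guarantee. The sketch below is such an approximation (queue statistics and the per-VM floor are invented); the paper's actual mechanism is implemented in the Ethernet controller hardware.

```python
LINK_CAPACITY_MBPS = 1000.0   # gigabit link
MIN_GUARANTEE_MBPS = 50.0     # assumed per-VM floor

def reallocate_bandwidth(rx_queue_bytes: dict) -> dict:
    """Split link capacity among VMs in proportion to their recent
    receive-queue traffic, after reserving a minimum share for each VM."""
    n = len(rx_queue_bytes)
    reserved = MIN_GUARANTEE_MBPS * n
    spare = max(0.0, LINK_CAPACITY_MBPS - reserved)
    total = sum(rx_queue_bytes.values())
    alloc = {}
    for vm, observed in rx_queue_bytes.items():
        share = (observed / total) if total > 0 else 1.0 / n
        alloc[vm] = MIN_GUARANTEE_MBPS + spare * share
    return alloc

# Per-interval receive-queue byte counts gathered for each VM (made up).
stats = {"vm0": 4_000_000, "vm1": 1_000_000, "vm2": 500_000}
for vm, mbps in reallocate_bandwidth(stats).items():
    print(f"{vm}: {mbps:.0f} Mbps")
```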
