Similar Literature
20 similar documents retrieved.
1.
With cloud and utility computing models gaining significant momentum, data centers are increasingly employing virtualization and consolidation as a means to support a large number of disparate applications running simultaneously on a chip-multiprocessor (CMP) server. In such environments, contention for shared platform resources (CPU cores, shared cache space, shared memory bandwidth, etc.) can have a significant effect on each virtual machine’s performance. In this paper, we investigate the shared resource contention problem for virtual machines by: (a) measuring the effects of shared platform resources on virtual machine performance, (b) proposing a model for estimating shared resource contention effects, and (c) proposing a transition from a virtual machine (VM) to a virtual platform architecture (VPA) that enables transparent shared resource management through architectural mechanisms for monitoring and enforcement. Our measurement and modeling experiments are based on a consolidation benchmark (vConsolidate) running on a state-of-the-art CMP server. Our virtual platform architecture experiments are based on detailed simulations of consolidation scenarios. Through detailed measurements and simulations, we show that shared resource contention affects virtual machine performance significantly and argue that a virtual platform architecture is a must for future virtualized datacenters.
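As a rough illustration of the kind of contention estimate such a model produces, the sketch below interpolates a VM's performance between a measured solo run and a measured worst-case co-run, weighted by the shared-cache demand of its co-runners. The profile fields and numbers are illustrative assumptions, not the paper's vConsolidate measurements.

```python
from dataclasses import dataclass

@dataclass
class VmProfile:
    name: str
    solo_ipc: float          # instructions per cycle when running alone
    contended_ipc: float     # IPC measured under a synthetic worst-case co-runner
    cache_demand_mb: float   # working-set size competing for the shared LLC

def estimate_ipc(vm: VmProfile, co_runners: list, llc_size_mb: float) -> float:
    """Interpolate between solo and worst-case IPC by the cache pressure of co-runners."""
    pressure = sum(c.cache_demand_mb for c in co_runners) / llc_size_mb
    pressure = min(pressure, 1.0)  # saturate: beyond a full LLC the worst case applies
    return vm.solo_ipc - pressure * (vm.solo_ipc - vm.contended_ipc)

if __name__ == "__main__":
    web = VmProfile("web", solo_ipc=1.6, contended_ipc=1.1, cache_demand_mb=6)
    db = VmProfile("db", solo_ipc=1.2, contended_ipc=0.7, cache_demand_mb=10)
    print(f"web IPC next to db: {estimate_ipc(web, [db], llc_size_mb=16):.2f}")
```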

2.
Cryptographic technology is the foundation of cloud computing security. High-performance cryptographic cards that support SR-IOV virtualization are well suited to cloud cipher machines and can provide virtualized data-encryption services for cloud environments to meet their security requirements. To address the poor compatibility, limited extensibility, weak migratability, and low cost-effectiveness of such cards when used in cloud cipher machines, this paper proposes a software virtualization method for cryptographic cards based on an I/O front-end/back-end model, using shared memory or VIRTIO as the communication...

3.
The increasing deployment of artificial intelligence has placed unprecedented demands on the computing power of cloud platforms. Cloud service providers have integrated accelerators with massive parallel computing units into the data center, and these accelerators must be combined with existing virtualization platforms to partition their computing resources. The current mainstream accelerator virtualization solution is PCI passthrough, which however does not support fine-grained resource provisioning. Some manufacturers have also started to provide time-sliced multiplexing schemes, using drivers that cooperate with specific hardware to divide resources and time slices among virtual machines, but these suffer from poor portability and flexibility. An alternative and promising approach is API forwarding, which forwards the virtual machine's requests to a back-end driver for processing through a split driver model; however, the communication required by API forwarding can easily become the performance bottleneck. This paper proposes Wormhole, an accelerator virtualization framework based on a client/server (C/S) architecture that supports rapid delegated execution across virtual machines. It aims to provide upper-level users with an efficient and transparent way to virtualize accelerators via API forwarding while ensuring strong isolation between multiple users. By leveraging hardware virtualization features, the framework minimizes performance degradation through exitless inter-VM control-flow switches. Experimental results show that Wormhole's prototype achieves up to a 5x performance improvement over traditional open-source virtualization solutions such as GVirtuS in training tests of classic models.
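The sketch below illustrates the API-forwarding idea in a client/server layout: a front-end stub serializes each "accelerator API" call and a back-end executes it and returns the result. The toy API, socket transport, and function names are assumptions for illustration only; Wormhole itself relies on exitless inter-VM control transfers rather than sockets, which this sketch does not model.

```python
import json, socket, threading, time

def backend(host="127.0.0.1", port=9099):
    """Back-end driver: receives serialized API calls and executes them."""
    api = {"vector_add": lambda a, b: [x + y for x, y in zip(a, b)],
           "scale": lambda a, k: [x * k for x in a]}
    srv = socket.create_server((host, port))
    conn, _ = srv.accept()
    with conn, srv:
        f = conn.makefile("rw")
        for line in f:                       # one forwarded call per line
            req = json.loads(line)
            result = api[req["fn"]](*req["args"])
            f.write(json.dumps({"result": result}) + "\n")
            f.flush()

def forward_call(sock_file, fn, *args):
    """Front-end stub: serialize the call, send it, block for the reply."""
    sock_file.write(json.dumps({"fn": fn, "args": list(args)}) + "\n")
    sock_file.flush()
    return json.loads(sock_file.readline())["result"]

if __name__ == "__main__":
    threading.Thread(target=backend, daemon=True).start()
    time.sleep(0.2)                          # give the back-end time to start listening
    sock = socket.create_connection(("127.0.0.1", 9099))
    f = sock.makefile("rw")
    print(forward_call(f, "vector_add", [1, 2, 3], [4, 5, 6]))   # [5, 7, 9]
    print(forward_call(f, "scale", [1, 2, 3], 10))               # [10, 20, 30]
```

Every call pays a round trip through the transport, which is exactly why per-call forwarding overhead becomes the bottleneck the paper targets.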

4.
Cloud computing and virtualization technology have revolutionized general-purpose computing applications in the past decade. The cloud paradigm offers advantages through reduced operating costs, server consolidation, flexible system configuration, and elastic resource provisioning. However, despite the success of cloud computing for general-purpose workloads, existing cloud computing and virtualization technology face tremendous challenges in supporting emerging soft real-time applications such as online video streaming, cloud-based gaming, and telecommunication management. These applications demand real-time performance in open, shared, and virtualized computing environments. This paper identifies the technical challenges in supporting real-time applications in the cloud, surveys recent advances in real-time virtualization and cloud computing technology, and offers research directions to enable cloud-based real-time applications in the future.

5.
Cloud computing is a new market-oriented commercial computing model that provides services to users on demand; its commercial nature makes the quality of the services delivered to users a central concern. Task scheduling and resource allocation are two key technologies in cloud computing, and the virtualization technology it employs makes its resource allocation and task scheduling different from those of traditional parallel and distributed computing. Current mainstream scheduling algorithms borrow scheduling strategies from grid environments and study QoS-based scheduling, but they suffer from low execution efficiency. We study task-level scheduling of cloud workflows in depth, analyze the characteristics of the virtual machines formed by virtualizing the underlying resources, and, combining the various QoS constraints of workflow tasks, propose a task-level ACS (ant colony system) scheduling algorithm based on the time-sharing characteristics of virtual machines. Experiments show that, compared with the algorithm in reference [1], the proposed algorithm has a clear advantage when executing a larger number of parallel tasks: it exploits the time-sharing characteristics of virtual machines well and optimizes the scheduling of tasks onto virtual machines.
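A minimal ant-colony-style sketch of the task-to-VM mapping described above, assuming independent tasks and per-(task, VM) execution-time estimates; the objective here is makespan only, whereas the paper's ACS variant also folds in workflow QoS constraints.

```python
import random

def acs_schedule(exec_time, n_ants=20, n_iters=50, alpha=1.0, beta=2.0, rho=0.1, seed=0):
    """exec_time[t][v] = estimated run time of task t on VM v; returns (assignment, makespan)."""
    rng = random.Random(seed)
    n_tasks, n_vms = len(exec_time), len(exec_time[0])
    tau = [[1.0] * n_vms for _ in range(n_tasks)]        # pheromone per (task, VM) pair
    best_assign, best_makespan = None, float("inf")
    for _ in range(n_iters):
        for _ in range(n_ants):
            load = [0.0] * n_vms
            assign = []
            for t in range(n_tasks):
                # Desirability: pheromone ** alpha * (1 / projected finish time) ** beta
                weights = [tau[t][v] ** alpha * (1.0 / (load[v] + exec_time[t][v])) ** beta
                           for v in range(n_vms)]
                v = rng.choices(range(n_vms), weights=weights)[0]
                assign.append(v)
                load[v] += exec_time[t][v]               # a time-shared VM accumulates work
            makespan = max(load)
            if makespan < best_makespan:
                best_assign, best_makespan = assign, makespan
        # Evaporate, then reinforce the best assignment found so far
        for t in range(n_tasks):
            for v in range(n_vms):
                tau[t][v] *= (1.0 - rho)
            tau[t][best_assign[t]] += 1.0 / best_makespan
    return best_assign, best_makespan

if __name__ == "__main__":
    random.seed(1)
    times = [[random.uniform(1, 10) for _ in range(4)] for _ in range(12)]  # 12 tasks, 4 VMs
    assign, mk = acs_schedule(times)
    print("assignment:", assign, "makespan: %.2f" % mk)
```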

6.
As a general-purpose, scalable parallel programming model for coding highly parallel applications, CUDA from NVIDIA provides several key abstractions: a hierarchy of thread blocks, shared memory, and barrier synchronization. It has proven rather effective at programming multithreaded many-core GPUs that scale transparently to hundreds of cores; as a result, scientists across industry and academia are using CUDA to dramatically expedite their production codes. GPU-based clusters are likely to play an essential role in future cloud computing centers, because some computation-intensive applications require GPUs as well as CPUs. In this paper, we adopted PCI pass-through technology to set up virtual machines that can use the NVIDIA graphics card and CUDA for high-performance computing; in this way, a virtual machine has not only virtual CPUs but also a real GPU for computation, and its performance is expected to increase dramatically. We measured the performance difference between physical and virtual machines using CUDA, and investigated how the number of CPUs assigned to a virtual machine influences CUDA performance. Finally, we compared the CUDA performance of two open-source virtualization hypervisor environments, with and without PCI pass-through. The experimental results indicate which environment is most efficient for running CUDA in a virtualized setting.

7.
Cloud computing is a hot topic in both academia and industry and is widely used for large-scale computation over massive data and user populations. Its characteristics require computer systems to deliver scalable computing capacity, and virtualization is the key layer for doing so, playing a major role in resource management, server consolidation, and improving resource utilization. With virtualization, a multi-level resource scheduling mechanism can be built to guarantee high resource utilization and system performance: first, a resource prediction model is established from the application characteristics of each virtual machine; then a resource allocation policy is derived from the prediction results; finally, dynamic resource optimization among virtual machines enables dynamically optimized resource use across virtual machines on the same or different physical hosts. Management decisions should be based not only on the macroscopic resource utilization of the physical machines but also on how the resource demands of the applications running inside the virtual machines change at run time. This yields a complete set of virtualized resource optimization techniques and usage schemes for cloud computing, supporting it comprehensively and organically at multiple levels: static deployment, dynamic prediction, dynamic resource allocation within a single host, dynamic load-balanced scheduling across hosts, and live migration.
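A minimal sketch of the "predict, then allocate" step in such a multi-level scheme, assuming an exponential-smoothing predictor and a fixed headroom factor; cross-host rebalancing and live migration are omitted, and the VM names and values are illustrative.

```python
def predict_demand(history, alpha=0.5):
    """Exponentially smoothed forecast of the next-interval CPU demand (in cores)."""
    forecast = history[0]
    for observed in history[1:]:
        forecast = alpha * observed + (1 - alpha) * forecast
    return forecast

def allocate(histories, host_cores, headroom=1.2):
    """Give each VM its predicted demand plus headroom, scaled down if the host is short."""
    wanted = {vm: predict_demand(h) * headroom for vm, h in histories.items()}
    total = sum(wanted.values())
    scale = min(1.0, host_cores / total) if total > 0 else 1.0
    return {vm: w * scale for vm, w in wanted.items()}

if __name__ == "__main__":
    histories = {"web": [1.0, 1.4, 2.0, 2.4],
                 "db": [3.0, 2.8, 2.5, 2.2],
                 "batch": [4.0, 4.2, 4.1, 4.3]}     # recent CPU usage samples per VM
    print(allocate(histories, host_cores=8))
```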

8.
Cloud computing provides scalable computing and storage resources over the Internet. These scalable resources can be dynamically organized as many virtual machines (VMs) to run user applications on a pay-per-use basis. The required resources of a VM are sliced from a physical machine (PM) in the cloud computing system, and a PM may hold one or more VMs. When a cloud provider creates a number of VMs, the main concern is the VM placement problem: how to place these VMs on appropriate PMs so as to provision their required resources. However, if two or more VMs are placed on the same PM, a certain degree of interference arises between them due to shared non-sliceable resources, e.g., I/O resources. This phenomenon is called VM interference. VM interference affects the performance of applications running in VMs, especially delay-sensitive applications, which have quality of service (QoS) requirements on their data access delays. This paper investigates how to integrate QoS awareness with virtualization in cloud computing systems, formulated as the QoS-aware VM placement (QAVMP) problem. In addition to fully exploiting the resources of PMs, the QAVMP problem considers the QoS requirements of user applications and the reduction of VM interference. The QAVMP problem therefore involves three factors: resource utilization, application QoS, and VM interference. We first formulate the QAVMP problem as an Integer Linear Programming (ILP) model by integrating the three factors as the profit of the cloud provider. Due to the computational complexity of the ILP model, we propose a polynomial-time heuristic algorithm to solve the QAVMP problem efficiently. In the heuristic algorithm, a bipartite graph models all possible placement relationships between VMs and PMs; the VMs are then gradually placed on their preferred PMs to maximize the profit of the cloud provider as much as possible. Finally, simulation experiments demonstrate the effectiveness of the proposed heuristic algorithm by comparison with other VM placement algorithms.
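A greedy stand-in for the paper's bipartite-graph heuristic: each feasible (VM, PM) edge gets a toy profit combining a utilization gain, a QoS bonus, and an interference penalty, and each VM is placed where its profit is highest. The profit terms and the numbers below are illustrative assumptions, not the paper's ILP formulation.

```python
def place(vms, pms, interference_penalty=0.5):
    """vms: {name: (cpu, delay_sensitive)}, pms: {name: cpu_capacity} -> {vm: pm}."""
    used = {pm: 0.0 for pm in pms}          # CPU already allocated on each PM
    count = {pm: 0 for pm in pms}           # co-located VMs (drives interference)
    placement = {}
    for vm in sorted(vms, key=lambda v: -vms[v][0]):        # place big VMs first
        cpu, delay_sensitive = vms[vm]
        best_pm, best_profit = None, float("-inf")
        for pm, cap in pms.items():
            if used[pm] + cpu > cap:
                continue                                     # capacity constraint
            utilization_gain = cpu / cap
            qos_bonus = 1.0 if (delay_sensitive and count[pm] == 0) else 0.0
            profit = utilization_gain + qos_bonus - interference_penalty * count[pm]
            if profit > best_profit:
                best_pm, best_profit = pm, profit
        if best_pm is None:
            raise RuntimeError(f"no PM can host {vm}")
        placement[vm] = best_pm
        used[best_pm] += cpu
        count[best_pm] += 1
    return placement

if __name__ == "__main__":
    vms = {"vm1": (4, True), "vm2": (2, False), "vm3": (2, True), "vm4": (1, False)}
    pms = {"pmA": 8, "pmB": 4}
    print(place(vms, pms))
```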

9.
A Security Protection Model for Live Migration in KVM Virtualization
范伟  孔斌  张珠君  王婷婷  张杰  黄伟庆 《软件学报》2016,27(6):1402-1416
Live migration moves a virtual machine between hosts dynamically and transparently to users, guaranteeing that computing tasks complete, and it offers advantages such as load balancing, removal of hardware dependencies, and efficient resource utilization. However, applying this technique exposes virtual machine and user information to network communication, so its security in virtualized environments has become a widespread concern among users and a growing research topic. Starting from the virtualization mechanism and the source code of the virtualized operating system, and taking the security of live migration as the entry point, this paper first analyzes the memory-leakage risk during live migration. It then combines the principles, communication mechanism, and migration mechanism of KVM (Kernel-based Virtual Machine) virtualization to design a new security protection model based on a hybrid random transformation coding scheme, which adds data monitoring and security modules at both the source and destination of a live migration to protect the migrated data. Finally, extensive experiments and simulations evaluate the model's protection capability and its impact on virtual machine performance. The results show that the model can secure live migration in KVM virtualized environments while balancing security and migration performance.

10.
Modern cloud data centers rely on server consolidation (the allocation of several virtual machines on the same physical host) to minimize their costs. Choosing the right consolidation level (how many and which virtual machines are assigned to a physical server) is a challenging problem, because contemporary multitier cloud applications must meet service level agreements in the face of highly dynamic, nonstationary, and bursty workloads. In this paper, we deal with the problem of achieving the best consolidation level that can be attained without violating application service level agreements. We tackle this problem by devising the fuzzy controller for consolidation and QoS (FC2Q), a resource management framework exploiting feedback fuzzy-logic control that is able to dynamically adapt the physical CPU capacity allocated to the tiers of an application in order to precisely match the needs induced by the intensity of its current workload. We implement FC2Q on a real testbed and use this implementation to demonstrate its ability to meet the aforementioned goals through a thorough experimental evaluation carried out with real-world cloud applications and workloads. Furthermore, we compare the performance achieved by FC2Q against that attained by existing state-of-the-art alternatives, and we show that FC2Q outperforms them in all the considered experimental scenarios.
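A toy feedback controller in the spirit of FC2Q, assuming a response-time SLO and a CPU cap measured in cores; the three coarse rules below merely mimic fuzzy behaviour and are not the paper's membership functions or rule base.

```python
def adjust_cpu_cap(cap, measured_rt, target_rt, cap_min=0.1, cap_max=4.0):
    """Grow the cap when the tier is too slow, reclaim CPU when it is comfortably fast."""
    error = (measured_rt - target_rt) / target_rt   # relative SLO violation
    if error > 0.5:
        step = 0.30 * cap                           # far too slow: grow aggressively
    elif error > 0.1:
        step = 0.10 * cap                           # slightly slow: grow gently
    elif error < -0.3:
        step = -0.10 * cap                          # comfortably fast: reclaim capacity
    else:
        step = 0.0                                  # within the dead band: hold steady
    return max(cap_min, min(cap_max, cap + step))

if __name__ == "__main__":
    cap = 1.0
    for rt in [250, 240, 180, 120, 90, 95]:         # measured response times (ms)
        cap = adjust_cpu_cap(cap, rt, target_rt=150)
        print(f"measured {rt} ms -> new CPU cap {cap:.2f} cores")
```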

11.
On an OpenStack cloud platform, a single physical server may run a dozen or more virtual machines at the same time, which places very high demands on the server's I/O performance. The efficiency of I/O virtualization is therefore crucial to improving the network performance of the entire OpenStack platform. To raise overall network performance, introducing SR-IOV into the OpenStack platform is one available option. This paper measures the impact of SR-IOV on network I/O performance in an OpenStack cloud through comparative experiments. Analysis of the results shows that, after SR-IOV is introduced, the I/O virtualization performance of OpenStack compute nodes improves by roughly 50%.

12.
Starting from an analysis of cloud data centers' needs to improve resource utilization and user QoS, this paper surveys virtualization, the key enabling technology, from three perspectives: server virtualization, network virtualization, and storage virtualization. It then builds an availability model for the cloud computing platform, analyzes how availability is computed for the IaaS, PaaS, and SaaS service models, and finally verifies experimentally that the proposed cloud availability reference model applies to common cloud service models.
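A minimal sketch of the availability arithmetic such a reference model rests on, assuming independent components: layers in series multiply, while redundant replicas combine in parallel. The layer availabilities below are illustrative, not values from the paper.

```python
from math import prod

def series(*availabilities):
    """All components must be up (e.g., hardware -> hypervisor -> VM -> application)."""
    return prod(availabilities)

def parallel(*availabilities):
    """The service survives if at least one replica is up."""
    return 1 - prod(1 - a for a in availabilities)

if __name__ == "__main__":
    iaas_vm = series(0.999, 0.9995, 0.998)           # host, hypervisor, VM image
    saas = series(parallel(iaas_vm, iaas_vm),        # two redundant VMs
                  0.9999)                            # shared application front end
    print(f"single IaaS VM: {iaas_vm:.5f}, redundant SaaS service: {saas:.5f}")
```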

13.
Recent developments in the field of virtualization technologies have led to renewed interest in the performance evaluation of these systems. The maturity of virtualization has made it attractive to provision IT services so as to maximize profit, scalability, and QoS. This solution facilitates the deployment of datacenter applications and of grid and cloud computing services; however, challenges remain. It is necessary to investigate the trade-off between overall system performance and revenue, and to ensure the service-level agreements of submitted workloads. Although a growing body of literature has investigated virtualization overhead and interference between virtual machines, accurate performance evaluation of virtualized systems is still lacking. In this paper, we present in-depth performance measurements to evaluate a Xen-based virtualized Web server, and we support this experimental study with queueing network modeling. Based on these quantitative and qualitative analyses, we present results that are important for the performance evaluation of consolidated workloads on the Xen hypervisor. First, the CPU and disk demands of both CPU-intensive and disk-intensive workloads are independent of the rate submitted to the unprivileged domain when dedicated cores are pinned to the virtual machines. Second, request response time depends not only on the processing time in the unprivileged domain but also on the number of flipped pages in Domain 0. Finally, the results show that the proposed modeling methodology predicts the QoS parameters well in both para-virtualized and hardware virtual machine modes, given the request content size.
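A minimal M/M/1 stand-in for the paper's queueing-network model, assuming Poisson arrivals and exponential service; in the paper the service demand would be tied to request content size and page flipping in Domain 0, which this sketch reduces to a single service rate.

```python
def mm1_response_time(arrival_rate, service_rate):
    """Mean response time W = 1 / (mu - lambda); requires a stable queue (lambda < mu)."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)

if __name__ == "__main__":
    for lam in [50, 100, 150, 180]:                   # requests per second to the web VM
        w = mm1_response_time(lam, service_rate=200)  # VM serves ~200 requests per second
        print(f"lambda={lam:>3} req/s -> mean response time {w * 1000:.1f} ms")
```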

14.
张伟哲  张宏莉  张迪  程涛 《计算机学报》2011,34(12):2265-2277
Virtualization provides important guarantees for the dynamic deployment and secure isolation of cloud infrastructure resources. Reclaiming memory from virtual machines that over-occupy it and handing it to memory-starved virtual machines, i.e., optimizing the memory distribution across multiple virtual machines, is a challenging problem in memory virtualization. This paper introduces a multi-VM memory management architecture in which self-adjustment and global adjustment cooperate. By defining a memory-abundant state and a memory-scarce state, it proposes an algorithm for the cooperation between self-adjustment and global adjustment...
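A toy illustration of the reallocation arithmetic behind such memory balancing, assuming per-VM working-set estimates plus a fixed reserve; it is not the paper's self-adjustment/global-adjustment protocol, and the sizes are illustrative.

```python
def rebalance(allocation, working_set, total_mb, reserve_mb=256):
    """Classify VMs as abundant or scarce, then split host memory in proportion to demand."""
    demand = {vm: working_set[vm] + reserve_mb for vm in allocation}
    scarce = [vm for vm in allocation if allocation[vm] < demand[vm]]
    abundant = [vm for vm in allocation if allocation[vm] > demand[vm]]
    total_demand = sum(demand.values())
    if total_demand <= total_mb:
        # Enough memory overall: give every VM its demand, spread the surplus evenly
        surplus = (total_mb - total_demand) / len(allocation)
        return {vm: demand[vm] + surplus for vm in allocation}, scarce, abundant
    # Otherwise shrink proportionally so the host is not overcommitted
    scale = total_mb / total_demand
    return {vm: demand[vm] * scale for vm in allocation}, scarce, abundant

if __name__ == "__main__":
    allocation = {"vm1": 4096, "vm2": 4096, "vm3": 4096}     # current balloon targets (MB)
    working_set = {"vm1": 1024, "vm2": 5000, "vm3": 3000}    # measured demand (MB)
    targets, scarce, abundant = rebalance(allocation, working_set, total_mb=12288)
    print("scarce:", scarce, "abundant:", abundant)
    print({vm: round(mb) for vm, mb in targets.items()})
```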

15.
童俊杰  赫罡  符刚 《计算机科学》2016,43(Z6):249-254
With the growing scale and number of cloud data centers and the widespread adoption of virtualization, the virtual machine placement problem has gradually become a research focus in both industry and academia. The choice of placement strategy and method has a major impact on data center energy consumption, physical resource utilization, and virtual machine performance. A sound placement method and strategy can effectively reduce the energy consumption of cloud data centers, improve physical resource utilization, and reduce the waste of physical resources, while leaving upper-layer applications and services unaffected. This paper describes the three basic elements of the virtual machine placement problem: optimization objectives, constraints, and solution methods, and it classifies and summarizes existing work along these lines. Finally, building on existing results, it outlines future research directions and key open problems.

16.
Energy Management for Virtualized Cloud Computing Platforms
The high energy consumption of data centers is an urgent problem. In recent years, virtualization technology and the cloud computing model have developed rapidly; because of their high resource utilization, flexible management, and good scalability, future data centers will adopt them widely. Combining traditional energy management techniques with virtualization offers a new approach to energy management in cloud data centers and is an important research direction. This paper reviews the latest research on energy management for virtualized cloud platforms from four aspects: energy measurement, energy modeling, energy management mechanisms, and energy management optimization algorithms. It analyzes the operational-management and energy-management problems facing virtualized cloud platforms and points out the difficulties of monitoring and measuring their energy consumption; introduces the steps of energy monitoring and methods for energy profiling; presents an overall energy model for virtual machine systems as well as energy models for two key techniques, server consolidation and live migration; summarizes progress on energy management mechanisms at the virtualization layer and the cloud platform layer; and classifies and compares energy management algorithms. Finally, it concludes and proposes ten directions worth further study.
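Much of this literature builds on a linear server power model, P(u) = P_idle + (P_peak - P_idle) * u. The sketch below uses it to estimate the saving from consolidating two lightly loaded hosts onto one; the wattages are illustrative assumptions, not values from this survey.

```python
def server_power(utilization, p_idle=100.0, p_peak=250.0):
    """Power draw in watts as a linear function of CPU utilization in [0, 1]."""
    return p_idle + (p_peak - p_idle) * utilization

if __name__ == "__main__":
    before = server_power(0.2) + server_power(0.3)   # two hosts at 20% and 30% utilization
    after = server_power(0.5)                        # one host at 50%, the other switched off
    print(f"before consolidation: {before:.0f} W, after: {after:.0f} W, "
          f"saving: {before - after:.0f} W")
```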

17.
孙瑞辰  孙磊 《计算机科学》2015,42(Z11):218-221, 235
The combination of cloud computing platforms and virtualization brings new requirements for inter-domain communication between virtual machines. Shared-memory-based inter-domain communication can improve the efficiency of communication between virtual machines running on the same physical machine, but the context switches incurred during shared-memory communication limit how far it can be optimized. This paper introduces a new shared-memory model, PAMM, which adds a management module that aggregates the memory pages transferred during shared-memory communication, reducing the number of hypercalls issued and thus the number of context switches. Experiments show that PAMM improves the efficiency of shared-memory-based inter-domain communication.

18.
Virtualization is a key technology for enabling cloud computing. The driver-domain model for network virtualization offers isolation and a high degree of flexibility, but it suffers from poor performance and lacks scalability. In this paper, we evaluate the networking performance of virtual machines within Xen and show that the I/O channel transferring packets between the driver domain and the virtual machines is the bottleneck. To overcome this limitation, we propose a packet-aggregation mechanism for transferring packets from the driver domain to the virtual machines. Packet aggregation, combined with efficient core allocation, allows virtual machine throughput to scale up by 700% while minimizing both memory and CPU consumption, and its impact on packet delay and jitter remains acceptable. Hence, the proposed I/O virtualization model meets the needs of infrastructure providers offering cloud computing services.
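A toy model of the aggregation idea: the driver domain buffers packets destined for a VM and posts one notification per container, flushed on a size or timeout threshold, instead of one notification per packet. The thresholds and the notify() callback are assumptions; in real Xen the notification would be an event-channel kick.

```python
import time

class Aggregator:
    def __init__(self, notify, max_packets=32, max_delay_s=0.001):
        self.notify = notify                   # callback delivering a whole batch to the VM
        self.max_packets = max_packets
        self.max_delay_s = max_delay_s
        self.buffer = []
        self.oldest = None

    def enqueue(self, packet):
        if not self.buffer:
            self.oldest = time.monotonic()
        self.buffer.append(packet)
        if len(self.buffer) >= self.max_packets:
            self.flush()

    def poll(self):
        """Called periodically so a half-full container is not delayed forever."""
        if self.buffer and time.monotonic() - self.oldest >= self.max_delay_s:
            self.flush()

    def flush(self):
        self.notify(self.buffer)               # one notification for the whole batch
        self.buffer = []

if __name__ == "__main__":
    notifications = []
    agg = Aggregator(notify=lambda batch: notifications.append(len(batch)))
    for i in range(100):
        agg.enqueue(f"pkt-{i}")
    if agg.buffer:
        agg.flush()                            # drain the final partial batch
    print(f"100 packets delivered with {len(notifications)} notifications: {notifications}")
```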

19.
A resource management framework for collaborative computing systems over multiple virtual machines (CCSMVM) is presented to increase the performance of computing systems by improving resource utilization, constructing a scalable computing environment for on-demand resource use. We design the framework around the strengths of components from grid computing, virtualized platforms, and cloud computing platforms to reduce system overheads and maintain workload balance, with the support of virtual appliances, the Xen API, application virtualization, and so on. The content of collaborative computing, the basis of virtualized resource management, and key technologies including resource planning, resource allocation, resource adjustment, resource release, and collaborative computing scheduling are designed in detail. A prototype has been built, and experiments have verified its correctness and feasibility. System evaluations show that the time spent on resource allocation and resource release is proportional to the number of virtual machines, whereas the time spent on virtual machine migration is not. CCSMVM achieves higher CPU utilization and better performance than other systems such as Eucalyptus 2.0 and Globus 4.0. Comparative analysis shows that CCSMVM accelerates execution by improving average CPU utilization, so it outperforms the others. Our study of this resource management framework is significant for optimizing the performance of virtual computing systems.

20.
Optimizing virtual machine placement is an effective way to reduce the energy consumption of cloud data centers, but over-aggressive consolidation can create hotspots in host racks and degrade the reliability of the services the data center provides. This paper proposes a virtual machine placement algorithm based on energy efficiency and reliability. By jointly considering the relationships among host utilization, host temperature, host power, cooling-system power, and host reliability, it builds a redundancy model that guarantees host reliability. Placement decisions are made dynamically while proactively avoiding rack hotspots, ensuring host service reliability while reducing the overall energy consumption of the data center. Simulation results show that the algorithm not only saves more energy and avoids hotspot hosts, but also provides better performance guarantees.
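A minimal scoring sketch in the spirit of this energy- and reliability-aware placement: hosts whose projected utilization would cross a hotspot threshold are rejected, and among the rest the host with the smallest projected power increase wins. The threshold, the linear power model, and the capacities are illustrative assumptions, not the paper's redundancy model.

```python
def power(u, p_idle=100.0, p_peak=250.0):
    """Linear host power model over CPU utilization u in [0, 1]."""
    return p_idle + (p_peak - p_idle) * u

def choose_host(vm_cpu, hosts, hotspot_threshold=0.8):
    """hosts: {name: (used_cores, total_cores)} -> host with the lowest power increase."""
    best, best_delta = None, float("inf")
    for name, (used, total) in hosts.items():
        u_before, u_after = used / total, (used + vm_cpu) / total
        if u_after > hotspot_threshold:        # would likely create a rack hotspot
            continue
        delta = power(u_after) - power(u_before)
        if delta < best_delta:
            best, best_delta = name, delta
    return best

if __name__ == "__main__":
    hosts = {"rack1-h1": (10, 16), "rack1-h2": (4, 16), "rack2-h1": (2, 32)}
    # rack1-h1 is rejected as a hotspot; rack2-h1 wins on the smaller power increase
    print(choose_host(vm_cpu=4, hosts=hosts))
```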
