Similar Documents
 20 similar documents retrieved (search time: 125 ms)
1.
Guest operating systems (Guest OS) running on virtual machines today are ordinary operating systems developed for physical hardware; they contain behavior ill-suited to virtualized environments that degrades VM I/O performance. Through testing, the authors identified several problems affecting VM I/O performance and propose optimizations for them: first, merging consecutive I/O instructions in the guest OS and lowering the VM's clock-interrupt frequency, which reduces context-switch overhead; second, eliminating redundant operations in the guest OS, including functions that are useless under virtualization, redundant I/O scheduling, and the virtual NIC driver's NAPI support, so that the VM performs only necessary work and overall system performance improves.

2.
To bridge the semantic gap and improve I/O performance, virtual CPUs (vCPUs) running different workload types must be scheduled with different policies, so the vCPU scheduling algorithm needs optimization. On the KVM virtualization platform, we propose STC (virtual CPU scheduler based on task classification), a task-classification-based vCPU scheduling model. It divides vCPUs and physical CPUs into two classes each, short vCPUs and long vCPUs, and short CPUs and long CPUs, and dispatches each class of vCPU to the corresponding class of physical CPU. Using machine-learning techniques, STC builds a classifier that extracts task behavior features to divide tasks into two categories: I/O-intensive tasks are assigned to short vCPUs, and compute-intensive tasks to long vCPUs. STC improves I/O responsiveness while preserving computational performance. Experiments show that, compared with the system's default CFS, STC reduces network latency by 18%, raises network throughput by 17%–25%, and preserves fair resource sharing across the whole system.
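The placement idea in this abstract can be sketched as a toy two-pool dispatcher. The feature names, thresholds, and the rule-based stand-in for the paper's learned classifier are all hypothetical:

```python
# Toy sketch of STC-style task placement. The two features and the
# threshold rule are illustrative stand-ins for the paper's ML classifier.

def classify_task(io_wait_ratio, cpu_burst_ms):
    """Label a task I/O-intensive or compute-intensive from two features."""
    # Assumption: a high I/O-wait share and short CPU bursts mark an I/O-bound task.
    return "io" if io_wait_ratio > 0.5 and cpu_burst_ms < 5 else "cpu"

def place(tasks):
    """Route I/O-bound tasks to short vCPUs, compute-bound tasks to long vCPUs."""
    pools = {"short_vcpu": [], "long_vcpu": []}
    for name, io_ratio, burst in tasks:
        kind = classify_task(io_ratio, burst)
        pools["short_vcpu" if kind == "io" else "long_vcpu"].append(name)
    return pools

# A web server waits on I/O; a video encoder burns CPU.
print(place([("nginx", 0.8, 1.0), ("ffmpeg", 0.05, 40.0)]))
```

In the full design each pool would then be pinned to its own class of physical CPU, so wake-ups of I/O-bound vCPUs never queue behind long compute bursts.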

3.
A scheduling method that adaptively adjusts virtual machine weight parameters
Under a VM architecture built on a privileged service OS, guest operating systems must go through the privileged service OS to access real hardware. Existing VM scheduling optimizations focus on I/O-intensive VMs while neglecting CPU-intensive VMs, and especially neglecting the impact of the privileged service OS's I/O processing capacity on overall VM performance. To address these problems, we propose an optimized scheduling method, based on the Credit algorithm, that adaptively adjusts VM weight parameters: it takes the privileged service OS's I/O processing capacity as a key input to weight adjustment while balancing the resource needs of I/O-intensive and CPU-intensive VMs. Experiments show that the method promptly and sensibly adjusts VM weights according to the current number of I/O requests and the privileged service OS's processing capacity, greatly improving guest OS CPU performance and hardware access performance.

4.
A virtual machine I/O scheduling algorithm with dynamic priority ordering
I/O task scheduling is a key factor in the performance of I/O-intensive virtual machines. Existing schedulers mostly optimize the whole machine's I/O bandwidth, rarely balance per-domain and global performance, and cannot provide differentiated service across domains. To remedy this, we propose DPS, a VM I/O scheduling algorithm with dynamic priority ordering. Based on multi-attribute decision theory, DPS computes the weights of the priority-evaluation attributes by deviation maximization and evaluates each I/O task's priority comprehensively; it introduces the value of the task's virtual domain to reflect differences in domain importance in cloud environments. Evaluations of DPS scheduling a virtualized NIC in Xen show that it effectively improves the I/O-task deadline guarantee ratio for designated domains and globally, raises whole-machine I/O bandwidth, and provides differentiated service to I/O applications in different virtual domains.
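Deviation maximization, the weighting method DPS relies on, is straightforward to compute: an attribute whose scores spread more across tasks receives a larger weight. A minimal sketch, with the attribute matrix and domain values invented for illustration:

```python
# Sketch of deviation-maximization weighting plus a domain-value-scaled
# priority score. The numbers in the demo are illustrative, not from DPS.

def deviation_weights(matrix):
    """Weight each attribute in proportion to the total pairwise absolute
    deviation of its column: attributes that discriminate more weigh more."""
    n_attrs = len(matrix[0])
    dev = []
    for j in range(n_attrs):
        col = [row[j] for row in matrix]
        dev.append(sum(abs(a - b) for a in col for b in col))
    total = sum(dev)
    return [d / total for d in dev]

def priority(matrix, domain_values):
    """Composite priority: weighted attribute score scaled by the value
    of the virtual domain the task belongs to."""
    w = deviation_weights(matrix)
    return [v * sum(wj * x for wj, x in zip(w, row))
            for row, v in zip(matrix, domain_values)]

# Two tasks scored on two attributes (e.g. deadline urgency, wait time);
# the second task's domain is considered twice as valuable.
print(priority([[1, 0], [0, 1]], [1, 2]))
```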

5.
Two improvements are proposed to address the Credit scheduler's inability to guarantee real-time behavior. First, when there are many I/O tasks, BOOST-state virtual CPUs are load-balanced to shorten system response time. Second, the original fixed time slice is replaced with a dynamic one that adapts to changes in vCPU load. The impact on I/O tasks is evaluated by measuring the system's average response time and turnaround time before and after the changes. Experiments show that the improved Credit scheduler lowers average response time by 102.3% relative to the original, significantly improving the performance of I/O-latency-sensitive applications.
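The dynamic time-slice idea can be illustrated with a toy formula that shrinks the slice as BOOST-state (I/O-woken) vCPUs accumulate, so latency-sensitive vCPUs reach the CPU sooner. The formula and its parameters are assumptions for illustration, not the paper's:

```python
# Hypothetical dynamic time slice: more BOOST vCPUs waiting -> shorter slice.

def time_slice(base_ms, nr_boost_vcpus, nr_runnable):
    """Scale the base slice down by the share of BOOST vCPUs in the mix,
    with a 1 ms floor to avoid thrashing on pure context switches."""
    slice_ms = base_ms * nr_runnable / max(1, nr_runnable + nr_boost_vcpus)
    return max(1.0, slice_ms)

print(time_slice(30, 0, 3))  # no I/O pressure: keep the full 30 ms slice
print(time_slice(30, 3, 3))  # heavy I/O pressure: slice halves to 15 ms
```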

6.
Current Xen VM schedulers schedule each virtual CPU independently, without considering co-scheduling relationships among a VM's vCPUs. This produces problems such as large skews in clock-interrupt counts across a VM's vCPUs, destabilizing the system. To improve stability, we propose MRCS (more relaxed co-scheduling), a Credit-based co-scheduling algorithm that is more relaxed than RCS (relaxed co-scheduling). MRCS uses non-preemptive co-adjustment to keep the relative running-time gap among vCPUs within the synchronization-detection upper bound Tmax, and combines optimized selection of vCPUs from the synchronization queue with Credit's dynamic vCPU migration. vCPUs are thus co-scheduled more promptly, physical CPUs remain load-balanced, context switches between guest OS and VMM are reduced, and system overhead drops. Experiments show the method preserves system stability while also improving performance to a degree. Since VM scheduling affects not only a VM's performance but also its stability, research on VM scheduling algorithms remains highly worthwhile.

7.
Hardware-assisted virtualization has greatly improved the performance of full virtualization, but desktop-virtualization applications under full virtualization, such as networked classrooms, still face serious limits on interactive performance. Interactive performance is dominated by I/O: I/O devices are slow, and full virtualization shares devices through emulation. This work improves interactive performance under full virtualization by exploiting the physical characteristics of multicore processors when placing and configuring VMs, dynamically adjusting each VM's resources according to user behavior, and optimizing I/O scheduling based on virtualization characteristics. Evaluation experiments confirm the effectiveness of the improvements.

8.
After years of development, CPU virtualization and memory virtualization have matured, but I/O virtualization remains the bottleneck, constraining overall system virtualization performance. Intel's VT-d technology can now provide pass-through I/O and, combined with NIC virtualization, implements SR-IOV, effectively solving many of the problems in I/O virtualization.

9.
Current virtualization technology cannot make virtual machines share disk bandwidth equally or in specified proportions, nor guarantee I/O performance isolation among VMs. Based on the block I/O request path in Xen paravirtualization, we propose XIOS (Xen I/O Scheduler), a disk I/O performance isolation algorithm for Xen VMs: it schedules each VM's block I/O operations (bio structures) at the generic block layer and enforces latency requirements at the I/O scheduling layer. Experiments show the algorithm effectively allocates disk bandwidth among VMs in proportion.
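Proportional sharing of the kind XIOS enforces reduces, at its core, to weighted splitting of the device's capacity among VMs. A minimal sketch (the share values and the IOPS figure are illustrative, and real enforcement happens per scheduling epoch on bio queues rather than as a one-shot split):

```python
# Illustrative weighted split of disk capacity among VMs.

def allocate_bandwidth(total_iops, shares):
    """Give each VM a quota of the device's IOPS proportional to its share."""
    total_share = sum(shares.values())
    return {vm: total_iops * s / total_share for vm, s in shares.items()}

# vm3 holds half the shares, so it receives half of the 1000 IOPS budget.
print(allocate_bandwidth(1000, {"vm1": 1, "vm2": 1, "vm3": 2}))
```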

10.
An analysis of VCPU scheduling algorithms in Xen
To reduce the performance overhead of virtual machines in virtualized environments and improve the efficiency of virtualization, Xen's credit-based scheduling algorithm is analyzed, taking into account the requirements placed on virtual processors during VM scheduling; the algorithm has clear advantages for processor-intensive workloads, multiprocessor scheduling, and QoS control. To address its performance problems on multiprocessors and under newer VMM architectures, optimizations such as spin-lock-first scheduling and processor pinning are proposed. Examples show that these measures improve virtual processor scheduling efficiency.

11.
Virtualization technology is an effective approach to improving energy efficiency in cloud platforms; however, it also introduces energy-efficiency losses, especially when I/O virtualization is involved. In this paper, we present an energy-efficiency enhanced virtual machine (VM) scheduling policy, namely Share-Reclaiming with Collective I/O (SRC-I/O), aimed at reducing the energy-efficiency losses caused by I/O virtualization. The proposed SRC-I/O scheduler allows VMs to reclaim extra CPU shares under certain conditions so as to increase CPU utilization. Meanwhile, it separates I/O-intensive VMs from CPU-intensive ones and schedules them collectively, reducing the context-switching cost of scheduling mixed workloads. Extensive experiments were conducted on various platforms to investigate the performance of the proposed scheduler. The results indicate that in the presence of mixed workloads, the SRC-I/O scheduler outperforms many existing VM schedulers in terms of energy efficiency and I/O responsiveness.

12.
Virtualization poses new challenges to I/O performance. The single-root I/O virtualization (SR-IOV) standard allows an I/O device to be shared by multiple Virtual Machines (VMs) without losing performance. We propose a generic virtualization architecture for SR-IOV-capable devices that can be implemented on multiple Virtual Machine Monitors (VMMs). With the support of our architecture, the SR-IOV-capable device driver is highly portable and agnostic of the underlying VMM. Because the Virtual Function (VF) driver under the SR-IOV architecture is tied to the hardware and poses a challenge to VM migration, we also propose a dynamic network interface switching (DNIS) scheme to address the migration challenge. Based on our first implementation of the network device driver, we deployed several optimizations to reduce virtualization overhead. We then conducted comprehensive experiments to evaluate SR-IOV performance. The results show that SR-IOV can achieve line-rate throughput (9.48 Gbps) and scale the network up to 60 VMs at the cost of only 1.76% additional CPU overhead per VM, without sacrificing throughput or migration capability.

13.
While virtualization enables multiple virtual machines (VMs), each with its own operating system and applications, to run within a physical server, it also complicates resource allocation when trying to guarantee the Quality of Service (QoS) requirements of the diverse applications running within these VMs. As QoS is crucial in the cloud, considerable research has been directed at CPU, memory, and network allocation to provide effective QoS to VMs, but little attention has been devoted to disk resource allocation. This paper presents the design and implementation of Flubber, a two-level scheduling framework that decouples throughput and latency allocation to provide QoS guarantees to VMs while maintaining high disk utilization. The high-level throughput control regulates the pending requests from the VMs with an adaptive credit-rate controller, in order to meet the throughput requirements of different VMs and ensure performance isolation. Meanwhile, the low-level latency control, by virtue of the batch and delay earliest deadline first mechanism (BD-EDF), reorders all pending requests from VMs based on their deadlines and batches them to the disk device, taking into account the locality of accesses across VMs. We have implemented Flubber and performed extensive evaluations on a Xen-based host. The results show that Flubber can simultaneously meet the different service requirements of VMs while improving the efficiency of the physical disk. The results also show an improvement of up to 25% in VM performance over state-of-the-art approaches: for example, in contrast to the default Xen disk I/O scheduler, Completely Fair Queueing (CFQ), Flubber not only achieves the desired QoS of each VM but also speeds up sequential and random reads by 17% and 25%, respectively, thanks to efficient physical disk utilization.
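The EDF half of BD-EDF can be sketched in a few lines: order pending requests by deadline, breaking ties by sector so that nearby accesses stay adjacent. This is an illustration of the idea under invented request tuples, not Flubber's implementation (which also batches and deliberately delays requests to exploit cross-VM locality):

```python
# Illustrative deadline-first ordering of pending disk requests.
# Each request is (deadline_ms, sector, vm_id); fields are hypothetical.

def edf_order(pending):
    """Earliest deadline first; among equal deadlines, keep requests to
    nearby sectors together to preserve access locality."""
    return sorted(pending, key=lambda r: (r[0], r[1]))

queue = [(20, 100, "vm2"), (10, 500, "vm1"), (10, 10, "vm1")]
print(edf_order(queue))
```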

14.
To improve cache isolation in virtualized systems and raise overall performance, a dynamic cache-partitioning algorithm is designed and implemented for virtualized environments. Using the idea of page coloring, the algorithm partitions the cache by allocating pages of private colors to each virtual machine, and it dynamically adjusts each VM's cache capacity according to its cache demand. The algorithm is implemented in the Xen virtual environment. Experiments show that it significantly improves the global performance of concurrent programs running across multiple VMs, at low overhead.
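Page coloring rests on the observation that a physical frame number, taken modulo the number of colors, selects the cache-set group the frame maps to; giving each VM a private color set keeps its cache lines from colliding with other VMs'. A minimal sketch, with the color count chosen arbitrarily rather than derived from a real cache geometry:

```python
# Illustrative page coloring. In practice NUM_COLORS is derived from the
# cache: cache_size / (associativity * page_size); 16 here is an assumption.
NUM_COLORS = 16

def page_color(frame_number):
    """The cache-set group (color) a physical frame maps to."""
    return frame_number % NUM_COLORS

def frames_for_vm(free_frames, private_colors):
    """Allocate to a VM only frames whose color lies in its private set."""
    return [f for f in free_frames if page_color(f) in private_colors]

# A VM holding colors {0, 1} gets 2 of every 16 consecutive frames.
print(frames_for_vm(range(32), {0, 1}))
```

Growing or shrinking a VM's cache share then amounts to adding colors to or removing colors from its private set and recoloring its pages.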

15.
Multicore systems are widely deployed in both embedded and high-end computing infrastructures. However, traditional virtualization systems cannot effectively isolate shared microarchitectural resources among virtual machines (VMs) running on multicore systems. CPU- and memory-intensive VMs contending for these resources suffer serious performance interference, which makes virtualization less efficient and VM performance less stable. In this paper, we propose a contention-aware performance prediction model for virtualized multicore systems that quantifies the performance degradation of VMs. First, we identify the performance interference factors and design synthetic micro-benchmarks to obtain a VM's contention sensitivity and intensity features, which correlate with its performance degradation. Second, based on the contention features, we build a VM performance prediction model using machine-learning techniques to quantify the precise level of performance degradation. The proposed model can be used to optimize VM performance on multicore systems. Our experimental results show that the performance prediction model achieves high accuracy, with a mean absolute error of 2.83%.

16.
Multicore processors are widely used in today's computer systems, and multicore virtualization technology provides an elastic way to utilize multicore systems more efficiently. However, the Lock Holder Preemption (LHP) problem in virtualized multicore systems wastes significant CPU cycles, hurting virtual machine (VM) performance and responsiveness; the more VMs the system consolidates, the worse the LHP problem becomes. In this paper, we propose an efficient consolidation-aware vCPU (CVS) scheduling scheme for multicore virtualization platforms. Based on the vCPU over-commitment rate, the CVS scheme adaptively selects one of three vCPU scheduling algorithms: co-scheduling, yield-to-head, or yield-to-tail. This is possible because vCPU scheduling decomposes into single steps such as scheduling vCPUs simultaneously or inserting a vCPU into the run queue at its head or tail. The CVS scheme effectively improves VM performance in low-, middle-, and high-consolidation scenarios. Using real-life parallel benchmarks, our experimental results show that the proposed CVS scheme improves overall system performance while keeping the optimization overhead low.
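The adaptive selection CVS performs can be sketched as a simple threshold rule on the over-commitment rate; the thresholds below are illustrative, not taken from the paper:

```python
# Hypothetical policy selector: which vCPU scheduling action to use at a
# given vCPU over-commitment rate (total vCPUs / physical cores).

def pick_policy(overcommit_rate):
    if overcommit_rate <= 1.0:
        return "co-scheduling"   # low consolidation: gang-schedule a VM's vCPUs
    if overcommit_rate <= 2.0:
        return "yield-to-head"   # medium: requeue the preempted lock holder first
    return "yield-to-tail"       # high: a yielding vCPU goes to the queue tail

for rate in (0.5, 1.5, 3.0):
    print(rate, pick_policy(rate))
```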

17.
With the development of virtualization and cloud computing, more and more high-performance computing (HPC) applications run on cloud resources. In an HPC cloud built on virtualization, an HPC application runs in multiple virtual machines, which may be placed on different physical nodes. If the VMs of several communication-intensive jobs are placed on the same physical node, they compete for the node's network I/O resources; when their demand exceeds the node's network I/O bandwidth limit, the performance of communication-intensive jobs suffers severely. To address this contention for network I/O, we propose NLPA, a VM placement algorithm based on network I/O load balancing, which uses a load-balancing strategy to reduce VM contention for network I/O resources. Experiments show that, compared with a greedy algorithm on the same HPC job test set, NLPA performs better on all three measures: job completion time, network I/O throughput, and network I/O load balance.

18.
张天宇, 关楠, 邓庆绪. Computer Science (《计算机科学》), 2015, 42(12): 115-119
To reduce overhead and increase flexibility, running multiple systems on a single general-purpose computing platform via virtualization has become a trend in complex real-time embedded systems. Xen is the most widely deployed virtualization technology of recent years. This work analyzes the real-time performance of its default Credit scheduling algorithm, enabling direct schedulability tests for real-time systems running on Xen and an intuitive assessment of Credit's real-time behavior through formal resource-bound functions. The paper first analyzes the basic implementation of the Credit scheduler, then proposes and proves a method of configuring VCPU parameters that improves Credit's real-time performance; on that basis, it establishes the basic properties of the Credit algorithm and derives the worst-case curve of the resource supply function it provides to a VCPU.

19.
Recently, with the growth of cyber-physical systems (CPS), many applications have been deployed in CPS to connect cyberspace with the physical world effectively. In addition, cloud computing (CC)-enabled CPS offers huge processing and storage resources, which is helpful in a range of application areas. At the same time, with the massive growth of applications in the CPS environment, the energy utilization of cloud-enabled CPS has gained significant interest. To improve the energy effectiveness of the CC platform, virtualization technologies are employed for resource management, and applications are executed in virtual machines (VMs). Since effective resource scheduling plays an important role in the design of cloud-enabled CPS, this paper focuses on the design of a chaotic sandpiper optimization based VM scheduling (CSPO-VMS) technique for energy-efficient CPS. The CSPO-VMS technique searches for the optimal VM migration solution and helps choose an effective scheduling strategy. The CSPO algorithm integrates the traditional SPO algorithm with chaos theory, substituting the main parameter and combining it with chaos. The chaotic concept is included in the SPO algorithm to improve its convergence rate and its ability to determine globally optimal solutions. The CSPO-VMS technique also derives a fitness function to choose the optimal scheduling strategy in the CPS environment. To demonstrate the enhanced performance of the CSPO-VMS technique, a wide range of simulations was carried out and the results were examined under varying aspects. The simulation results confirm the improved performance of the CSPO-VMS technique over recent methods in terms of different measures.

20.
In cloud computing, scheduling plays an eminent role in processing enormous numbers of jobs. Parallel jobs need parallel processing capabilities, which leads to CPU underutilization, mainly due to synchronization and communication among parallel processes. Researchers have introduced several algorithms for scheduling parallel jobs, namely Conservative Migration Consolidation supported Backfilling (CMCBF) and Aggressive Migration Consolidation supported Backfilling (AMCBF). The greatest challenge for existing scheduling algorithms is to improve data-center utilization without affecting job responsiveness. Hence, this work proposes an Effective Multiphase Scheduling Approach (EMSA) to process the jobs. In EMSA, jobs are first preprocessed and batched together to avoid starvation and mitigate unwanted delay. An Associate Priority Method is then proposed that prioritizes the batched jobs to minimize the number of migrations. Finally, the prioritized jobs are scheduled using a Priority Scheduling with Backfilling algorithm to utilize the intermediate idle nodes. Moreover, virtualization technology partitions the computing capacity of the Virtual Machine (VM) into a two-tier VM, a foreground VM (FVM) and a background VM (BVM), to improve node utilization. Hence, the Priority Scheduling with Consolidation-based Backfilling algorithm is deployed on the two-tier VM and processes jobs by utilizing the VMs effectively. Experimental results show that the proposed work performs better than other existing algorithms, increasing resource utilization by 8%.
