Similar Articles
 20 similar articles found (search time: 0 ms)
1.
Most of the current cloud computing providers allocate virtual machine instances to their users through fixed-price allocation mechanisms. We argue that combinatorial auction-based allocation mechanisms are more efficient than fixed-price mechanisms, since the virtual machine instances are assigned to the users with the highest valuations. We formulate the problem of virtual machine allocation in clouds as a combinatorial auction problem and propose two mechanisms to solve it. The proposed mechanisms are extensions of two existing combinatorial auction mechanisms. We perform extensive simulation experiments to compare the two proposed combinatorial auction-based mechanisms with the currently used fixed-price allocation mechanism. Our experiments reveal that the combinatorial auction-based mechanisms can significantly improve the allocation efficiency while generating higher revenue for the cloud providers.
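The paper's two mechanisms extend existing combinatorial auction designs; as a rough, hypothetical illustration of the general idea (not the authors' exact mechanisms), the sketch below runs a greedy winner determination in which bundles of VM instances are granted in decreasing order of valuation per requested instance:

```python
# Hypothetical greedy winner determination for a combinatorial VM auction.
# Each bid requests a bundle of VM instances of several types and states a valuation;
# bids are served in decreasing order of valuation per requested instance (bid density).

def greedy_auction(bids, capacity):
    """bids: list of (user, bundle, valuation); bundle maps VM type -> count.
    capacity: dict mapping VM type -> available instances."""
    remaining = dict(capacity)
    order = sorted(bids, key=lambda b: b[2] / sum(b[1].values()), reverse=True)
    winners, revenue = [], 0.0
    for user, bundle, valuation in order:
        if all(remaining.get(t, 0) >= n for t, n in bundle.items()):
            for t, n in bundle.items():
                remaining[t] -= n
            winners.append(user)
            revenue += valuation
    return winners, revenue

# Example: two VM types, three bidders.
bids = [("u1", {"small": 2, "large": 1}, 9.0),
        ("u2", {"small": 1}, 4.0),
        ("u3", {"large": 2}, 5.0)]
print(greedy_auction(bids, {"small": 3, "large": 2}))   # -> (['u2', 'u1'], 13.0)
```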

2.
We design a task mapper, TPCM, for assigning tasks to virtual machines, and an application-aware virtual machine scheduler, TPCS, oriented toward parallel computing, to achieve high performance in virtual computing systems. To solve the problem of mapping tasks to virtual machines, a virtual machine mapping algorithm (VMMA) in TPCM is presented to achieve load balance in a cluster. Based on these mapping results, TPCS is constructed from three components: a middleware supporting application-driven scheduling, a device driver in the guest OS kernel, and a virtual machine scheduling algorithm. These components are implemented in user space, the guest OS, and the CPU virtualization subsystem of the Xen hypervisor, respectively. In TPCS, the progress of tasks is transmitted from user space to the underlying kernel, enabling the virtual machine scheduling policy to schedule based on task progress. This policy trades task completion time for higher resource utilization. Experimental results show that TPCM can mine the parallelism among tasks to implement the mapping from tasks to virtual machines based on the relations among subtasks. The TPCS scheduler can complete the tasks in a shorter time than Credit and other schedulers, because it uses task progress to ensure that the tasks in virtual machines complete simultaneously, thereby reducing the time spent pending and in synchronization, communication, and switching. Therefore, parallel tasks can collaborate with each other to achieve higher resource utilization and lower overheads. We conclude that the TPCS scheduler overcomes the shortcomings of existing algorithms in perceiving task progress, making it better suited than the schedulers currently used in parallel computing.
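As a toy illustration of progress-aware scheduling (our own sketch, not the TPCS implementation inside Xen), the snippet below assigns larger CPU scheduling weights to VMs whose tasks lag behind, so that cooperating parallel tasks tend to finish at roughly the same time:

```python
# Toy progress-aware CPU weighting: VMs with more remaining work get larger weights.

def cpu_weights(progress, base=256):
    """progress: {vm: fraction of its task completed, in (0, 1]}."""
    lag = {vm: 1.0 - p for vm, p in progress.items()}
    total = sum(lag.values()) or 1.0
    # Weight proportional to remaining work; keep a small floor for nearly finished VMs.
    return {vm: max(16, int(base * len(progress) * l / total)) for vm, l in lag.items()}

print(cpu_weights({"vm1": 0.9, "vm2": 0.5, "vm3": 0.2}))
# -> {'vm1': 54, 'vm2': 274, 'vm3': 438}
```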

3.
In the Cloud Computing market, a significant number of cloud providers offer Infrastructure as a Service (IaaS), including the capability of deploying virtual machines of many different types. The deployment of a service in a public provider generates a cost derived from the rental of the allocated virtual machines. In this paper we present LLOOVIA (Load Level based OptimizatiOn for VIrtual machine Allocation), an optimization technique designed for the optimal allocation of the virtual machines required by a service, in order to minimize its cost while guaranteeing the required level of performance. LLOOVIA considers virtual machine types, different kinds of limits imposed by providers, and two price schemes for virtual machines: reserved and on-demand. LLOOVIA, which can be used in multi-cloud environments, provides two types of solutions: (1) the optimal solution and (2) an approximated solution based on a novel approach that applies binning to histograms of load levels. An extensive set of experiments has shown that when the size of the problem is huge, the approximated solution is calculated in a much shorter time and is very close to the optimal one. The technique has been applied to a set of case studies based on the Wikipedia workload. These cases demonstrate that LLOOVIA can handle problems in which hundreds of virtual machines of many different types, multiple providers, and different kinds of limits are used.
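The approximated solution compresses load information by binning histograms of load levels; the sketch below (simplified, with made-up VM types and prices, and only a naive per-bin costing step rather than the real optimizer) shows the flavour of that compression:

```python
# Illustrative sketch (not LLOOVIA itself): compress a per-hour load trace into a
# histogram of load levels, then cost each bin once instead of each time slot.

import math
from collections import Counter

def load_histogram(load_trace, bin_width):
    """Group observed load levels into bins of width `bin_width`;
    return {representative_load: number_of_time_slots}."""
    hist = Counter()
    for load in load_trace:
        # Use the upper edge of the bin as a conservative representative load.
        upper = ((load // bin_width) + 1) * bin_width
        hist[upper] += 1
    return dict(hist)

def cheapest_allocation(load, vm_perf, vm_price):
    """Naive per-bin step: cover `load` requests/hour with the cheapest single VM type
    (a real solver would consider mixes, reserved instances, and provider limits)."""
    best = min(vm_perf, key=lambda t: math.ceil(load / vm_perf[t]) * vm_price[t])
    return best, math.ceil(load / vm_perf[best])

trace = [120, 340, 310, 900, 880, 450, 130, 95]          # requests per hour
for load_level, hours in sorted(load_histogram(trace, bin_width=250).items()):
    vm, count = cheapest_allocation(load_level, {"m1": 100, "m2": 450}, {"m1": 1.0, "m2": 3.5})
    print(f"{hours} h at <= {load_level} req/h -> {count} x {vm}")
```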

4.
Each cloud service provider exposes only an interface to its own cloud infrastructure for enabling clients to use its cloud resources. However, cloud providers face a number of difficulties in ensuring proper functioning. One of the main problems of a cloud provider is the lack of resources to support a huge number of on-demand resource provisioning requests, and resources cannot simply be shared among different cloud providers, since federation is not a basic operation of a cloud provider. An effective way to overcome this problem is to extend the cloud provider's interface with automatic negotiation, so that the best agreement between different cloud providers can be formed dynamically based on the service level agreement. In this article, we propose an extension of the Open Cloud Computing Interface, the standardized interface for cloud computing, to support automatic negotiation between different cloud providers. To demonstrate the efficiency and effectiveness of our approach, we implement a prototype to evaluate the key ideas presented in this article.

5.
To address the virtual machine placement problem in cloud computing environments, a PM-LB placement algorithm that takes system load balance fully into account is proposed. First, performance vectors are used to describe the performance of the virtual infrastructure in a normalized way. Then, the matching vector of the VM to be placed is obtained by computing the relative distance between its performance vector and those of the servers. Finally, the matching vector is analyzed together with the system load vector to produce the placement decision. Simulation experiments in CloudSim show that the proposed algorithm achieves good system load balance and high resource utilization.
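A rough sketch of the matching idea described above, with assumed details rather than the paper's exact PM-LB formulas: each VM and host is described by a normalized performance vector, hosts are scored by the distance between their vector and the VM's, and the score is blended with the host's current load:

```python
# Assumed illustration of vector-matching placement biased toward lightly loaded hosts.

import math

def place_vm(vm_vec, hosts, alpha=0.5):
    """hosts: {name: (perf_vector, load)} with load in [0, 1].
    Lower score = better match; alpha trades matching quality against load balance."""
    def score(perf, load):
        dist = math.sqrt(sum((v - p) ** 2 for v, p in zip(vm_vec, perf)))
        return alpha * dist + (1 - alpha) * load
    return min(hosts, key=lambda h: score(*hosts[h]))

hosts = {"h1": ((0.6, 0.4, 0.2), 0.7),
         "h2": ((0.5, 0.5, 0.3), 0.3)}
print(place_vm((0.5, 0.4, 0.25), hosts))   # -> "h2": similar capability, lower load
```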

6.
The growing size and complexity of cloud systems determine scalability issues for resource monitoring and management. While most existing solutions consider each Virtual Machine (VM) as a black box with independent characteristics, we embrace a new perspective where VMs with similar behaviors in terms of resource usage are clustered together. We argue that this new approach has the potential to address scalability issues in cloud monitoring and management. In this paper, we propose a technique to cluster VMs based on the usage of multiple resources, assuming no knowledge of the services executed on them. This technique models VM behavior by exploiting probability histograms of their resource usage, and performs smoothing-based noise reduction and selection of the most relevant information to consider in the clustering process. Through extensive evaluation, we show that our proposal achieves high and stable performance in terms of automatic VM clustering, and can reduce the monitoring requirements of cloud systems.
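The following is a minimal sketch of this style of clustering under an assumed feature design (not the paper's exact technique): each VM is represented by smoothed probability histograms of its resource usage, and the concatenated histograms are clustered, here with k-means from scikit-learn:

```python
# Assumed sketch: histogram features per resource, moving-average smoothing, k-means.

import numpy as np
from sklearn.cluster import KMeans

def vm_features(samples_by_resource, bins=16, window=3):
    """samples_by_resource: {resource_name: 1-D array of usage samples in [0, 100]}."""
    feats = []
    for samples in samples_by_resource.values():
        hist, _ = np.histogram(samples, bins=bins, range=(0, 100), density=True)
        kernel = np.ones(window) / window          # simple moving-average smoothing
        feats.append(np.convolve(hist, kernel, mode="same"))
    return np.concatenate(feats)

# Synthetic example: 4 VMs, two resources (CPU, network), two behaviour classes.
rng = np.random.default_rng(0)
vms = [{"cpu": rng.normal(20, 5, 500), "net": rng.normal(10, 3, 500)} for _ in range(2)] + \
      [{"cpu": rng.normal(70, 8, 500), "net": rng.normal(60, 10, 500)} for _ in range(2)]
X = np.array([vm_features(vm) for vm in vms])
print(KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X))  # e.g. [0 0 1 1]
```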

7.
Background: Virtual Machine (VM) consolidation is an effective technique to improve resource utilization and reduce the energy footprint of cloud data centers. It can be implemented in a centralized or a distributed fashion. Distributed VM consolidation approaches are currently gaining popularity because they are often more scalable than their centralized counterparts and they avoid a single point of failure. Objective: To present a comprehensive, unbiased overview of the state of the art in distributed VM consolidation approaches. Method: A Systematic Mapping Study (SMS) of the existing distributed VM consolidation approaches. Results: 19 papers on distributed VM consolidation, categorized in a variety of ways. The results show that the existing distributed VM consolidation approaches use four types of algorithms, optimize a number of different objectives, and are often evaluated with experiments involving simulations. Conclusion: There is currently increasing interest in developing and evaluating novel distributed VM consolidation approaches. A number of research gaps exist toward which future research may be directed.

8.
Virtualization is one of the most important technologies for constructing a cloud service, and hardware virtualization in particular is indispensable for infrastructure as a service (IaaS), where the cloud offering, infrastructure, is usually provided as a pool of virtual machine (VM) instances. For that reason, many public IaaS clouds such as Amazon Web Services and private cloud toolkits such as Eucalyptus and OpenStack provide users with methods for managing VM instances via APIs, command-line tools, web services, and so on. These are, however, not easy to use or customize for average end users, especially those in scientific research areas who just want to perform their work on a cloud and do not need to know much about the underlying technologies. Utilizing workflow management systems (WfMSs) to manage VMs on a cloud can alleviate these difficulties: users only need to describe the parameters needed for VMs and enact the workflow on a workflow enactment engine using user-friendly interfaces. We propose a management scheme for VM instances on a cloud with a WfMS in this paper. We present a preliminary study on integrating a cloud and a WfMS, focusing on the management of VM instances, and show an early implementation as a proof of concept, with detailed explanations and possible usage scenarios. Copyright © 2015 John Wiley & Sons, Ltd.

9.
Fast virtual machine cloning in cloud computing environments
Virtual machine cloning refers to rapidly replicating multiple virtual machines (VMs) in a cloud computing environment and distributing them across multiple physical hosts; the cloned VMs share the same initial state and then run independently to provide services. VM cloning allows cloud providers to deploy system resources quickly and efficiently. This paper presents a fast VM cloning method that uses copy-on-write to create snapshots of the virtual disk and memory state, and then uses on-demand memory allocation and multicast to request and transfer that state. Experiments on the C3 cloud platform show that the method achieves fast VM cloning in the cloud without interrupting the services running in the source VM.
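As a toy illustration of the copy-on-write idea behind such fast cloning (not the paper's actual disk and memory snapshot implementation), a clone can share the parent's read-only state and store only the pages it modifies, so creating a clone is nearly free:

```python
# Toy copy-on-write image: clones share the parent's pages until they write.

class CowImage:
    def __init__(self, base):
        self.base = base          # shared, read-only parent state (page -> bytes)
        self.delta = {}           # private pages written by this clone

    def read(self, page):
        return self.delta.get(page, self.base.get(page))

    def write(self, page, data):
        self.delta[page] = data   # copy-on-write: the parent stays untouched

parent = {0: b"kernel", 1: b"app", 2: b"data"}
clone_a, clone_b = CowImage(parent), CowImage(parent)
clone_a.write(2, b"data-a")
print(clone_a.read(2), clone_b.read(2))   # b'data-a' b'data'
```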

10.
How to reduce the execution cost of a workflow while meeting its deadline constraint is now one of the main problems in cloud workflow scheduling. Three-step list scheduling algorithms can solve this problem effectively, but they can only form static sub-deadlines in the deadline-distribution phase. To make it easier for users to deploy workflow tasks, cloud providers offer three instance types, among which spot instances have a very large price advantage. To address these issues, a cost-optimization workflow scheduling algorithm with dynamic deadline distribution (S-DTDA) is proposed. The algorithm uses particle swarm optimization to distribute the deadline dynamically, compensating for the weakness of three-step list scheduling. In the virtual machine selection phase, spot instances are added to the candidate resources, greatly reducing the execution cost. Experimental results show that, compared with other classical algorithms, the proposed algorithm has clear advantages in success rate and execution cost. In summary, S-DTDA can effectively solve the deadline-constrained cost-optimization problem in workflow scheduling.
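A highly simplified sketch of dynamic deadline distribution with particle swarm optimization, under an assumed cost model and made-up VM types (not the paper's S-DTDA, which also handles spot instances and task dependencies): each particle encodes how the global deadline is split across workflow stages, and its fitness is the cheapest VM mix that meets those sub-deadlines.

```python
# Assumed model: each stage has a fixed amount of "work"; a VM of speed s finishes
# `work / s` time units of it and costs `price` per time unit.

import random

STAGE_WORK = [40.0, 90.0, 30.0]                      # work units per stage
VM_TYPES = [(1.0, 0.10), (2.0, 0.25), (4.0, 0.70)]   # (speed, price per time unit)
DEADLINE = 60.0

def cost(fractions):
    """Price the workflow when the deadline is split in proportion to `fractions`."""
    total = sum(fractions)
    subdeadlines = [DEADLINE * f / total for f in fractions]
    c = 0.0
    for work, sub in zip(STAGE_WORK, subdeadlines):
        feasible = [(s, p) for s, p in VM_TYPES if work / s <= sub]
        if not feasible:
            return float("inf")                      # this split cannot meet the deadline
        c += min((work / s) * p for s, p in feasible)
    return c

def pso(n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    dim = len(STAGE_WORK)
    pos = [[random.uniform(0.1, 1.0) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=cost)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d] + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(1.0, max(0.05, pos[i][d] + vel[i][d]))
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest + [gbest], key=cost)
    return gbest, cost(gbest)

random.seed(1)
split, total_cost = pso()
print([round(f, 2) for f in split], round(total_cost, 2))
```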

11.
Current cloud providers allocate their virtual machine (VM) instances through pricing algorithms or auction-like algorithms. However, most of these algorithms require static VM provisioning and cannot accurately predict user demand, leaving resources underutilized. To address this, a combinatorial auction-based algorithm for dynamic VM provisioning and allocation is proposed, which takes users' demand for VMs into account when making provisioning decisions. The algorithm treats the available computing resources as a "fluid" resource that can be divided into different numbers and types of VM instances according to user requests; allocation decisions are then made according to the users' valuations until all resources are allocated. Simulation experiments on real workload data from the Parallel Workload Archive show that the method yields higher revenue for cloud providers and improves resource utilization.
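As a very rough illustration of the "fluid resource" idea under made-up instance specs (not the paper's algorithm), the sketch below carves requested instances out of the raw core/memory pool in decreasing order of the users' declared valuations instead of drawing from pre-provisioned VM pools:

```python
# Assumed specs: (cores, GiB RAM) per instance of each type.
VM_SPECS = {"small": (1, 2), "large": (4, 8)}

def provision(requests, cores, ram):
    """requests: list of (user, vm_type, count, declared_valuation)."""
    allocated = []
    for user, vm_type, count, value in sorted(requests, key=lambda r: r[3], reverse=True):
        need_cores = count * VM_SPECS[vm_type][0]
        need_ram = count * VM_SPECS[vm_type][1]
        if need_cores <= cores and need_ram <= ram:
            cores -= need_cores
            ram -= need_ram
            allocated.append((user, vm_type, count))
    return allocated, cores, ram

reqs = [("u1", "large", 2, 30.0), ("u2", "small", 6, 12.0), ("u3", "large", 1, 9.0)]
print(provision(reqs, cores=12, ram=32))   # u1 and u3 are served; u2 does not fit
```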

12.
Zoltán Ádám Mann 《Software》2018,48(7):1368-1389
In recent years, many algorithms have been proposed for the optimized allocation of virtual machines in cloud data centers. Such algorithms are usually implemented and evaluated in a cloud simulator. This paper investigates the impact of the choice of cloud simulator on the implementation of the algorithms and on the evaluation results. In particular, we report our experiences with porting an algorithm and its evaluation framework from one simulator (CloudSim) to another (DISSECT‐CF). Our findings include limitations in the design of the simulators and in existing algorithm implementations. Based on this experience, we propose architectural guidelines for the integration of virtual machine allocation algorithms into cloud simulators.
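As one illustration of the kind of decoupling such guidelines call for (our own sketch, not the paper's proposed architecture), the allocation logic can be written against a thin, simulator-neutral interface so that porting to another simulator only means writing a new adapter:

```python
# Allocation algorithm coded once against a neutral interface; each simulator
# (CloudSim, DISSECT-CF, ...) would get its own adapter implementing CloudModel.

from abc import ABC, abstractmethod

class CloudModel(ABC):
    """Simulator-neutral view of the data center used by the allocation algorithm."""
    @abstractmethod
    def hosts(self): ...
    @abstractmethod
    def free_capacity(self, host): ...
    @abstractmethod
    def place(self, vm, demand, host): ...

def first_fit(model, vm_demand, vms):
    """Allocation logic written once against the neutral interface."""
    for vm in vms:
        for host in model.hosts():
            if model.free_capacity(host) >= vm_demand[vm]:
                model.place(vm, vm_demand[vm], host)
                break

class InMemoryModel(CloudModel):
    """Stand-in adapter for testing; a real adapter would wrap a simulator's API."""
    def __init__(self, capacity):
        self.cap, self.placement = dict(capacity), {}
    def hosts(self):
        return list(self.cap)
    def free_capacity(self, host):
        return self.cap[host]
    def place(self, vm, demand, host):
        self.cap[host] -= demand
        self.placement[vm] = host

m = InMemoryModel({"h1": 2, "h2": 1})
first_fit(m, {"vm1": 1, "vm2": 1, "vm3": 1}, ["vm1", "vm2", "vm3"])
print(m.placement)   # {'vm1': 'h1', 'vm2': 'h1', 'vm3': 'h2'}
```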

13.
14.
The increasing requirements of big data analytics and complex scientific computing impose significant burdens on cloud data centers. As a result, not only the computation but also the communication expenses in data centers are greatly increased. Previous work on green computing in data centers has mainly focused on the energy consumption of the servers rather than of the communication. However, for emerging applications that transmit big data flows, more energy may be consumed by communication links, switching, and aggregation elements. To this end, based on the transmission characteristics of data flows, we propose a novel Job-Aware Virtual Machine Placement and Route Scheduling (JAVPRS) scheme to reduce the energy consumption of data center networks (DCN) while still meeting as many network QoS (Quality of Service) requirements as possible. Our proposed scheme not only migrates large data flows but also integrates small data flows to improve the utilization of communication links. With more idle switches turned off, the DCN's energy consumption is thus reduced. Besides data flow migration and integration, a Traffic Engineering (TE) technique is also applied to decrease transmission delay and increase network throughput. To evaluate the performance of the proposed scheme, a number of simulation studies are performed. Compared to the selected benchmarks, the simulation results show that JAVPRS achieves 22.28%–35.72% energy savings while reducing communication delay by 5.8%–6.8% and improving network throughput by 13.3%.

15.
Recently, a growing number of scientific applications have been migrated to the cloud. To deal with the problems this brings, more and more researchers are considering multiple optimization goals in workflow scheduling. However, previous work ignores some details that are challenging but essential. Most existing multi-objective workflow scheduling algorithms overlook weight selection, which may degrade the quality of solutions. Besides, we find that the well-known partial critical path (PCP) strategy, which has been widely used to meet deadline constraints, cannot accurately reflect the situation at each time step. Workflow scheduling is an NP-hard problem, so self-optimizing algorithms are well suited to solving it. In this paper, the aim is to solve a workflow scheduling problem with a deadline constraint. We design a deadline-constrained scientific workflow scheduling algorithm based on multi-objective reinforcement learning (RL) called DCMORL. DCMORL uses the Chebyshev scalarization function to scalarize its Q-values; this method is well suited to choosing weights for the objectives. We also propose an improved version of the PCP strategy called MPCP, whose sub-deadlines are updated regularly during the scheduling phase so that they accurately reflect the situation at each time step. The optimization objectives in this paper are minimizing the execution cost and the energy consumption within a given deadline. Finally, we use four scientific workflows to compare DCMORL with several representative scheduling algorithms. The results indicate that DCMORL outperforms them. As far as we know, this is the first application of RL to a deadline-constrained workflow scheduling problem.
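Chebyshev scalarization is a standard way to collapse a vector of Q-values into a single score; the toy snippet below (made-up Q-values, weights, and reference point, not DCMORL itself) shows action selection for two objectives that are both to be minimized:

```python
# Weighted Chebyshev scalarization: an action's score is its largest weighted distance
# to a utopian reference point z*, and the action with the smallest score is selected.

def chebyshev_score(q_vector, weights, z_star):
    return max(w * abs(q - z) for q, w, z in zip(q_vector, weights, z_star))

def select_action(q_values, weights, z_star):
    """q_values: {action: (Q_cost, Q_energy)}; lower Q is better for both objectives."""
    return min(q_values, key=lambda a: chebyshev_score(q_values[a], weights, z_star))

q_values = {"vm_small": (8.0, 3.0), "vm_large": (3.0, 9.0), "vm_medium": (5.0, 5.0)}
z_star = (2.0, 2.0)          # slightly better than the best observed value per objective
print(select_action(q_values, weights=(0.5, 0.5), z_star=z_star))   # -> "vm_medium"
```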

16.
Containers make it easy to package, migrate, and configure Web applications, and have become a research hotspot in recent years. This paper proposes Double-GA, a resource allocation strategy for container clouds based on an improved genetic algorithm. Double-GA works at two levels: allocating containers to virtual machines and allocating virtual machines to physical hosts. A mathematical model of this two-level allocation is designed, with the overall energy consumption of the physical hosts in the container cloud as the objective function of the Double-GA strategy. Building on the genetic algorithm, Double-GA uses a double-chromosome encoding and handles initialization, evolution, crossover, and mutation accordingly. Results on real experimental data show that the double-chromosome Double-GA algorithm clearly outperforms the plain genetic algorithm (GA) and the decreasing best-fit algorithm.
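A minimal sketch of what a double-chromosome encoding and an energy-oriented fitness can look like (our own illustration; the paper's crossover and mutation operators are not reproduced here): one gene string assigns containers to VMs, the other assigns VMs to hosts, and fitness counts the hosts that must stay powered on.

```python
# Double-chromosome individual: (container -> VM genes, VM -> host genes).

import random

N_CONTAINERS, N_VMS, N_HOSTS = 8, 4, 3

def random_individual():
    return ([random.randrange(N_VMS) for _ in range(N_CONTAINERS)],   # container -> VM
            [random.randrange(N_HOSTS) for _ in range(N_VMS)])        # VM -> host

def fitness(individual):
    c2v, v2h = individual
    used_vms = set(c2v)
    active_hosts = {v2h[v] for v in used_vms}
    return len(active_hosts)            # fewer active hosts ~ less energy (to minimize)

random.seed(0)
population = [random_individual() for _ in range(30)]   # a real GA would evolve this
best = min(population, key=fitness)
print(best, fitness(best))
```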

17.
To reduce the execution cost of scientific workflows in cloud environments, an optimization method for execution plans is proposed. It introduces the monkey algorithm and relies on intra-level and inter-level optimization of the current execution plan: while preserving the workflow's global deadline constraint, it logically aggregates tasks within a level and adjusts tasks across levels to minimize the difference in the number of tasks per level, avoiding idle resources and shortening task waiting times. Experiments show that, compared with similar work, the method reduces resource consumption and total delay.

18.
We address scheduling independent and precedence constrained parallel tasks on multiple homogeneous processors in a data center with dynamically variable voltage and speed as combinatorial optimization problems. We consider the problem of minimizing schedule length with energy consumption constraint and the problem of minimizing energy consumption with schedule length constraint on multiple processors. Our approach is to use level-by-level scheduling algorithms to deal with precedence constraints. We use a simple system partitioning and processor allocation scheme, which always schedules as many parallel tasks as possible for simultaneous execution. We use two heuristic algorithms for scheduling independent parallel tasks in the same level, i.e., SIMPLE and GREEDY. We adopt a two-level energy/time/power allocation scheme, namely, optimal energy/time allocation among levels of tasks and equal power supply to tasks in the same level. Our approach results in significant performance improvement compared with previous algorithms in scheduling independent and precedence constrained parallel tasks.
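A worked sketch of the speed/power model that underlies this kind of energy/time trade-off (assumed exponent and task sizes, not the paper's exact allocation scheme): with dynamic power p = s^α, raising the per-task power shortens each level's span but increases the energy drawn.

```python
# Assumed DVFS model: speed = power ** (1/ALPHA), time = work / speed,
# energy = power * time. Tasks in a level run in parallel under equal power.

ALPHA = 3.0   # assumed exponent in the dynamic power model p = s ** ALPHA

def level_time_energy(task_works, power):
    speed = power ** (1.0 / ALPHA)
    times = [w / speed for w in task_works]
    level_time = max(times)                      # a level ends when its largest task ends
    energy = sum(power * t for t in times)       # each task draws `power` while running
    return level_time, energy

levels = [[10.0, 6.0], [8.0, 8.0, 4.0]]          # work per task, grouped by level
for p in (1.0, 2.0, 4.0):
    t = sum(level_time_energy(lv, p)[0] for lv in levels)
    e = sum(level_time_energy(lv, p)[1] for lv in levels)
    print(f"power={p}: schedule length={t:.2f}, energy={e:.2f}")
```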

19.
The global data center market has grown explosively in recent years. As the market grows, so does the demand for fast provisioning of virtual resources to support elastic, manageable, and economical computing over the cloud. Fast provisioning of large-scale virtual machines (VMs), in particular, is critical to guaranteeing quality of service (QoS). In this paper, we systematically review the existing VM provisioning schemes and classify them into three main categories. We discuss the features and research status of each category, and introduce two recent solutions, VMThunder and VMThunder+, both of which can provision hundreds of VMs in seconds.

20.
The dynamic nature of grid environments makes resource access control during scientific workflow execution an important research topic. A context-aware resource access control mechanism is therefore proposed, and the task context of scientific workflows and its constraints are analyzed and defined. A context-aware resource access control algorithm is described, and on this basis a context-aware scientific workflow management system framework is designed. Finally, the algorithm is validated on a weather-forecasting scientific workflow.
