Similar Documents
 20 similar documents found (search time: 31 ms)
1.
Cloud computing is a popular platform for processing tasks, with virtual machines serving as the executing elements. Task scheduling in the cloud still suffers from problems such as poor utilization and long makespan, so this article presents a human-inspired approach to the job shop scheduling problem in the cloud environment. Because job shop scheduling is particularly challenging in a multi-cloud environment, the article improves the well-known self-adaptive Brain Storm Optimization (BSO) scheme so that candidate solutions are recommended and updated more effectively, and scheduling is then performed on this basis. The allocation of jobs to the resources of a heterogeneous cloud is encoded as a brainstorming process. The resulting scheduling scheme is evaluated against several performance criteria, including resource utilization rate, job completion, and makespan, and the outcomes are verified. After implementation, the proposed model is compared with BSO, Particle Swarm Optimization, Genetic Algorithm, and Differential Evolution, and the analysis shows its superior performance.
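A minimal sketch (not from the paper) of the kind of fitness evaluation such a metaheuristic scheduler needs: given a candidate assignment of jobs to heterogeneous VMs, compute makespan and average utilization. The job lengths, VM speeds, and the candidate assignment are illustrative assumptions.

```python
# Hypothetical fitness evaluation for a candidate job-to-VM assignment,
# of the kind an (improved) BSO scheduler would call on each candidate.

def evaluate_schedule(assignment, job_lengths, vm_speeds):
    """assignment[i] = index of the VM that runs job i."""
    finish = [0.0] * len(vm_speeds)          # busy time accumulated per VM
    for job, vm in enumerate(assignment):
        finish[vm] += job_lengths[job] / vm_speeds[vm]
    makespan = max(finish)
    utilization = sum(finish) / (len(vm_speeds) * makespan)  # avg busy fraction
    return makespan, utilization

# Toy example: 5 jobs (million instructions) on 2 heterogeneous VMs (MIPS).
jobs = [400, 250, 900, 300, 150]
vms = [1000, 500]
candidate = [0, 1, 0, 1, 1]                   # one "idea" in the BSO population
mk, util = evaluate_schedule(candidate, jobs, vms)
print(f"makespan={mk:.2f}s, utilization={util:.2%}")
```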

2.
Cloud computing allows the deployment and execution of different types of applications, such as interactive databases or web-based services, which require distinct types of resources. These applications lease cloud resources for considerably long periods and usually occupy various resources to maintain a high quality of service (QoS). In contrast, general big-data batch-processing workloads are less QoS-sensitive and require massively parallel cloud resources for short periods. Despite the elasticity of cloud computing, the fine-scale characteristics of cloud-based applications may cause temporary low resource utilization, while process-intensive, highly utilized workloads suffer from performance issues. Utilization-efficient scheduling of heterogeneous workloads is therefore a challenging issue for cloud owners. In this paper, to address the impact of heterogeneity on low utilization of cloud computing systems, a joint resource allocation scheme for cloud applications and processing jobs is presented to enhance cloud utilization. The main idea is to co-schedule processing jobs and cloud applications in a preemptive way. Utilization-efficient resource allocation, however, requires exact modeling of the workloads, so a novel methodology for modeling processing jobs and other cloud applications is first proposed: jobs are modeled as a collection of parallel and sequential tasks in a Markovian process, which enables analysis and calculation of the resources required to serve the tasks efficiently. The next step uses the proposed model to develop a preemptive scheduling algorithm for processing jobs in order to improve resource utilization and its associated costs in the cloud computing system. Accordingly, a preemption-based resource allocation architecture is proposed to utilize idle reserved resources for processing jobs effectively and efficiently, and performance metrics such as the service time of the processing jobs are investigated. The accuracy of the proposed analytical model and scheduling analysis is verified through simulations and experiments, which also shed light on the achievable QoS level for the preemptively allocated processing jobs.

3.
Cloud datacenters host hundreds of thousands of physical servers that provide computing resources for executing customer jobs. Because failures of these physical machines are considered normal rather than exceptional in large-scale distributed systems, evaluating availability within a datacenter is essential for both cloud providers and customers. Although providing a highly available and reliable computing infrastructure is essential to maintaining customer confidence, cloud providers also want highly utilized datacenters to increase the profitability of the delivered services. Cloud architectural solutions should therefore consider both high availability for customers and high resource utilization for providers. This paper presents a highly reliable cloud architecture that leverages the 80/20 rule (80% of cluster failures come from 20% of physical machines) to identify failure-prone physical machines, dividing each cluster into a reliable and a risky sub-cluster. Customer jobs are further divided into latency-sensitive and latency-insensitive types. The results showed that only about 1% of all requested jobs are extremely latency-sensitive and require 99.999% availability. By serving revenue-generating jobs, which account for less than 50% of all requested jobs, within the reliable sub-cluster of physical machines, cloud providers can make their businesses more profitable by avoiding service level agreement violation penalties and improving their reputations.
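A minimal sketch, under assumed data, of how such a split and routing policy could look: machines are ranked by historical failure count, the top 20% form the risky sub-cluster, and latency-sensitive jobs are placed only on the reliable sub-cluster. The failure counts and the routing rule are illustrative, not the paper's exact procedure.

```python
# Illustrative 80/20 split of a cluster into reliable/risky sub-clusters
# and a simple placement rule for latency-sensitive jobs.

def split_cluster(failure_counts, risky_fraction=0.2):
    """failure_counts: dict machine_id -> observed failures."""
    ranked = sorted(failure_counts, key=failure_counts.get, reverse=True)
    n_risky = max(1, int(len(ranked) * risky_fraction))
    risky = set(ranked[:n_risky])
    reliable = set(ranked[n_risky:])
    return reliable, risky

def place(job_is_latency_sensitive, reliable, risky):
    # Latency-sensitive (revenue-generating) jobs go to the reliable sub-cluster;
    # latency-insensitive jobs may use any machine.
    return sorted(reliable) if job_is_latency_sensitive else sorted(reliable | risky)

failures = {"m1": 0, "m2": 14, "m3": 1, "m4": 0, "m5": 9}
reliable, risky = split_cluster(failures)
print("risky:", risky)                      # {'m2'} with these toy counts
print("candidates for a latency-sensitive job:", place(True, reliable, risky))
```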

4.
Cloud resource scheduling is an important way to save energy in cloud data centers. In real cloud platforms, however, the resource limits of individual physical machines lead to virtual machine resource contention and low utilization. To address this, by analyzing virtual machine load similarity and resource occupancy, an energy-aware virtual machine migration strategy based on three-way decisions is proposed. First, during VM migration, a three-way partitioning strategy for cloud resources is designed, and the K-means algorithm is used to select the sequence of VMs to be migrated within each partition; second, the VM placement order is determined from the load similarity between virtual machines and physical machines; finally, the effectiveness of the proposed method is verified on the CloudSimPlus cloud simulation platform. Experimental results show that the proposed method can effectively reduce cloud energy consumption and achieve full resource utilization.
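A minimal sketch (my own illustration, not the paper's algorithm) of a three-way partition of hosts by CPU utilization: over-loaded hosts supply migration candidates, under-loaded hosts are consolidation targets, and the boundary region is left alone. The thresholds are assumptions, and the paper's K-means-based selection within each region is omitted.

```python
# Illustrative three-way partition of hosts by CPU utilization.
# Hosts above the upper threshold are migration sources, hosts below the
# lower threshold are candidates for consolidation, the rest stay untouched.

LOW, HIGH = 0.3, 0.8   # assumed decision thresholds

def three_way_partition(host_utilization):
    """host_utilization: dict host_id -> CPU utilization in [0, 1]."""
    overloaded  = {h for h, u in host_utilization.items() if u >= HIGH}
    underloaded = {h for h, u in host_utilization.items() if u <= LOW}
    boundary    = set(host_utilization) - overloaded - underloaded
    return overloaded, boundary, underloaded

hosts = {"h1": 0.92, "h2": 0.15, "h3": 0.55, "h4": 0.84, "h5": 0.28}
over, mid, under = three_way_partition(hosts)
print("migration sources:", over)      # {'h1', 'h4'}
print("consolidation targets:", under) # {'h2', 'h5'}
```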

5.
李昆仑, 王珺, 宋健, 董庆运. 《软件学报》 (Journal of Software), 2015, 26(S2): 78-89
Existing dynamic cloud task scheduling algorithms based on batch scheduling and evolutionary algorithms are computationally expensive and incur high scheduling time costs. To address this, a local cloud task scheduling algorithm based on improved Gene Expression Programming (GEP) and resource change is proposed. First, the standard GEP algorithm is adapted to the characteristics of cloud task scheduling; then a fitness function based on overall utilization and energy consumption is constructed as a weighted sum; finally, a local scheduling algorithm based on the improved GEP and the amount of resource change is derived from the differences in overall utilization among physical machines. By monitoring task execution and physical resource usage and setting reasonable thresholds, the local algorithm reduces the number of physical machines involved in scheduling and thereby lowers the time cost of scheduling. Based on the rolling horizon (RH) model, experiments compare the proposed algorithm with a standard genetic algorithm and a global GEP algorithm, showing that it reduces search time, is less prone to local optima, and converges faster.

6.
To address the energy consumption of cloud computing data centers, a green cloud computing framework is proposed and a green cloud system architecture is designed. Based on this architecture, energy is allocated as a system resource, and three green task scheduling algorithms are proposed: STF-OS, LTF-OS, and RT-OS. A theoretical analysis of the feasibility of the three algorithms shows that they can effectively reduce energy consumption. Simulation experiments implemented by extending the CloudSim simulation platform show that STF-OS reduces data center energy consumption best.
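A minimal sketch of a shortest-task-first dispatch loop of the kind the STF-OS name suggests; the per-server energy bookkeeping, speeds, and power figures below are my illustrative assumptions, not the paper's model.

```python
# Illustrative shortest-task-first dispatch with per-server energy bookkeeping.
# Tasks are sorted by length and each is sent to the server that becomes free first.

import heapq

def stf_dispatch(task_lengths, servers):
    """servers: list of (speed MIPS, power W). Returns total active energy in joules."""
    ready = [(0.0, i) for i in range(len(servers))]   # (time server is free, id)
    heapq.heapify(ready)
    energy = 0.0
    for length in sorted(task_lengths):               # shortest task first
        free_at, sid = heapq.heappop(ready)
        speed, power = servers[sid]
        runtime = length / speed
        energy += power * runtime                     # active energy only
        heapq.heappush(ready, (free_at + runtime, sid))
    return energy

tasks = [120, 800, 60, 400, 250]                      # million instructions
servers = [(1000, 200.0), (500, 120.0)]               # (MIPS, watts), assumed
print(f"total active energy: {stf_dispatch(tasks, servers):.1f} J")
```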

7.
林伟伟, 吴文泰. 《软件学报》 (Journal of Software), 2016, 27(4): 1026-1041
Cloud computing has driven a major transformation in computer science, but it has also brought increasingly prominent energy consumption problems, so energy management for cloud computing has become a research hotspot in recent years. Energy measurement and management in cloud systems directly affect the sustainable development of cloud computing: energy data is the basis both for building energy models and for evaluating cloud resource scheduling algorithms. After an extensive survey of existing approaches, this paper summarizes four energy measurement methods for current cloud environments: direct measurement based on software or hardware, estimation based on energy models, measurement based on virtualization technology, and evaluation based on simulation, and analyzes and compares their advantages, shortcomings, and applicable environments. On this basis, important future research directions for cloud energy management are identified: intelligent host power modules, energy models for different application types, energy models for mixed workloads, efficient and dynamically manageable cloud simulation tools, energy management for dynamic heterogeneous distributed clusters, energy-saving methods for big data analytics and task scheduling, and energy-saving planning under new (renewable) energy supply, pointing out directions for research on energy saving in cloud computing.

8.
Research on energy-saving algorithms for data centers in cloud computing systems (cited by 3)
This paper briefly introduces the definition and characteristics of cloud computing and focuses on the high energy consumption of cloud data centers. Current energy-saving algorithms are classified, with an emphasis on DVFS-based algorithms, virtualization-based algorithms, and host shutdown/startup-based algorithms, and their advantages, disadvantages, and applicable environments are compared and analyzed. Finally, open research problems in the energy management of cloud data centers are summarized.

9.
Cloud computing is a form of distributed computing that promises to deliver reliable services through next-generation data centers built on virtualized compute and storage technologies. It is becoming truly ubiquitous, and with cloud infrastructures becoming essential components for providing Internet services, cloud providers are deploying ever more energy-hungry data centers. As providers often rely on large data centers to offer the resources required by users, the energy consumed by cloud infrastructures has become a key environmental and economic concern. Much energy is wasted in these data centers because of under-utilized resources, contributing to global warming. To conserve energy, these under-utilized resources need to be used efficiently, which requires allocating jobs to cloud resources in such a way that resources are used efficiently and there is a gain in both performance and energy efficiency. In this paper, a model for an energy-aware resource utilization technique is proposed to manage cloud resources efficiently and enhance their utilization. It further helps reduce the energy consumption of clouds through server consolidation via virtualization, without degrading the performance of users' applications. An artificial bee colony based energy-aware resource utilization technique corresponding to the model has been designed to allocate jobs to resources in a cloud environment. The performance of the proposed algorithm has been evaluated against existing algorithms with the CloudSim toolkit. The experimental results demonstrate that the proposed technique outperforms existing techniques by minimizing the energy consumption and execution time of applications submitted to the cloud. Copyright © 2014 John Wiley & Sons, Ltd.

10.

Cloud computing infrastructures are intended to provide computing services to end users over the Internet in a pay-per-use model. The extensive deployment of the cloud and the continuous increase in the capacity and utilization of data centers (DCs) lead to massive power consumption, and this growing scale has made energy consumption a critical concern. This paper emphasizes task scheduling by formulating a system model that minimizes the makespan and the energy consumption incurred in a data center. An energy-aware task scheduling scheme for a blockchain-based data center is also proposed to offer an optimal solution that minimizes makespan and energy consumption. The established model was analyzed with a target-time responsive precedence scheduling algorithm, and the observations were compared with traditional scheduling algorithms. The outcomes show that the developed solution performs better with respect to resource utilization and reduced energy consumption, and the investigation reveals that the applied strategy considerably enhances the effectiveness of the designed schedule.

11.
An optimized energy management method for random tasks on cloud computing platforms (cited by 5)
谭一鸣, 曾国荪, 王伟. 《软件学报》 (Journal of Software), 2012, 23(2): 266-278
In a running cloud computing system, idle compute nodes produce a large amount of idle energy consumption, and mismatched task scheduling produces a large amount of "luxury" (wasted) energy consumption. To address this waste, an energy optimization and management method based on task scheduling is proposed. First, the cloud system is modeled with a queueing model, its average response time and average power are analyzed, and an energy consumption model of the cloud system is established. Then a task scheduling policy based on high service intensity and low execution energy is proposed to control the idle and "luxury" energy consumption respectively. Based on this policy, a scheduling algorithm, ME3PC (minimum expectation execution energy with performance constraints), is designed to minimize expected execution energy while satisfying performance constraints. Experimental results show that, while guaranteeing execution performance, the algorithm can substantially reduce the energy overhead of a cloud computing system.
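A minimal sketch of the kind of queueing-based energy model this line of work builds on: for an M/M/1 node, the mean response time and the average power (idle plus a utilization-dependent part) give an energy-per-job estimate. The power figures and the M/M/1 simplification are illustrative assumptions, not the paper's exact model.

```python
# Illustrative M/M/1-based energy model for a single compute node.
# lam: arrival rate (jobs/s), mu: service rate (jobs/s).

def node_metrics(lam, mu, p_idle=100.0, p_busy=250.0):
    assert lam < mu, "queue must be stable"
    rho = lam / mu                       # utilization (service intensity)
    mean_response = 1.0 / (mu - lam)     # M/M/1 mean response time (s)
    avg_power = p_idle + rho * (p_busy - p_idle)   # average power draw (W)
    energy_per_job = avg_power / lam     # J per completed job in steady state
    return mean_response, avg_power, energy_per_job

t, p, e = node_metrics(lam=8.0, mu=10.0)
print(f"mean response time {t:.2f}s, average power {p:.1f}W, "
      f"energy per job {e:.1f}J")
```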

12.
In its simplest form, cloud computing is a massive collection of connected servers residing in a datacenter, continuously changing to provide services to users on demand through a front-end interface. Task failure during execution is no longer an exception but a frequent attribute of scheduling systems in large-scale distributed environments. Computational intelligence techniques have recently been used to tackle scheduling problems in the cloud environment, but only a few address fault tolerance. This paper puts forward a Checkpointed League Championship Algorithm (CPLCA) scheduling scheme for cloud computing systems: a fault-tolerance-aware task scheduling mechanism that combines a checkpointing strategy with task migration to handle unexpected failures of independent task executions. Simulation results show that the proposed CPLCA scheme improves the total average makespan by 41%, 33%, and 23% compared with Ant Colony Optimization (ACO), a Genetic Algorithm (GA), and the basic League Championship Algorithm (LCA), respectively. In terms of total average response time, CPLCA improves on ACO, GA, and LCA by 54%, 57%, and 30%, respectively. It also yields a significant decrease in job execution failures as measured by the failure metrics and the performance improvement rate. From the results obtained, CPLCA improves both task scheduling performance and failure awareness, making it well suited to scheduling in the cloud computing model.
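A minimal, self-contained sketch of the checkpoint-and-restart idea underlying such a scheme (not the paper's implementation): a task periodically saves its progress, and after a simulated failure it resumes from the last checkpoint instead of starting over. The failure injection and checkpoint interval are illustrative.

```python
# Illustrative checkpoint/restart execution of a single task.
import random

def run_with_checkpointing(total_steps, checkpoint_every=10, fail_prob=0.02):
    """Execute total_steps units of work, checkpointing progress periodically.
    On a (random, simulated) failure, resume from the last checkpoint."""
    checkpoint = 0           # last saved progress
    progress = 0
    wasted = 0               # work redone because of failures
    while progress < total_steps:
        progress += 1
        if random.random() < fail_prob:          # simulated node failure
            wasted += progress - checkpoint
            progress = checkpoint                 # restart from checkpoint
            continue
        if progress % checkpoint_every == 0:
            checkpoint = progress                 # persist progress
    return wasted

random.seed(1)
print("work units redone due to failures:", run_with_checkpointing(200))
```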

13.
陈暄, 赵文君, 龙丹. 《计算机应用研究》 (Application Research of Computers), 2021, 38(3): 751-754, 781
Task scheduling in mobile cloud computing suffers from long completion times and high device energy consumption. To address this, a task scheduling strategy based on an improved bird swarm algorithm (IBSA) is proposed. First, a mobile cloud task scheduling model centered on energy consumption and time is constructed. Second, adaptive perception and social coefficients are introduced to keep the algorithm from falling into local optima, and a learning factor is built into the flight behavior to preserve the search ability of individuals. Finally, the task scheduling objective function serves as the fitness function of individual birds in the iterative updates of the algorithm. Simulation results show that, compared with the ant colony algorithm, particle swarm optimization, the whale optimization algorithm, and others, the improved bird swarm algorithm performs well for mobile cloud task scheduling and can effectively save time and reduce energy consumption.

14.
The high energy consumption of workflow execution not only increases the economic cost for cloud resource providers but also lowers the reliability of the cloud system. To reduce workflow execution energy while still meeting deadlines, an energy-efficient workflow scheduling algorithm, CWEES, is proposed. The algorithm divides energy-efficiency-oriented scheduling into three phases: initial task mapping, processor resource consolidation, and task slacking. Initial task mapping obtains an initial task scheduling order through bottom-up (upward-rank) task leveling; processor resource consolidation reuses slack time to merge relatively inefficient processors and reduce the number of resources used; task slacking reselects, for each task, the optimal target resource with a suitable voltage/frequency level, reducing the total workflow execution energy without violating task ordering or deadline constraints. The performance of the algorithm was analyzed in simulation experiments with a random workflow model. The results show that CWEES achieves higher resource utilization and lower workflow execution energy under deadline constraints, balancing execution efficiency and energy consumption.
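A minimal sketch of the bottom-up (upward-rank) leveling used for the initial task ordering, in the spirit of HEFT-style list scheduling; the toy DAG, runtimes, and communication costs are assumptions, not taken from the paper.

```python
# Illustrative upward-rank computation for a workflow DAG.
# rank_u(t) = runtime(t) + max over successors s of (comm(t, s) + rank_u(s));
# scheduling tasks by decreasing rank_u respects precedence constraints.

from functools import lru_cache

runtime = {"A": 4, "B": 3, "C": 2, "D": 5}
succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
comm = {("A", "B"): 1, ("A", "C"): 2, ("B", "D"): 1, ("C", "D"): 3}

@lru_cache(maxsize=None)
def rank_u(task):
    tails = [comm[(task, s)] + rank_u(s) for s in succ[task]]
    return runtime[task] + (max(tails) if tails else 0)

order = sorted(runtime, key=rank_u, reverse=True)
print({t: rank_u(t) for t in runtime})     # {'A': 16, 'B': 9, 'C': 10, 'D': 5}
print("initial scheduling order:", order)  # ['A', 'C', 'B', 'D']
```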

15.
Cloud computing, as a promising technology and paradigm, can provide various data services, such as data sharing and distribution, which allow users to derive benefits without deep knowledge of the underlying systems. However, popular cloud data services also bring many new data security and privacy challenges. Because the cloud service provider is untrusted and the data are outsourced, collusion attacks involving cloud service providers and data users become extremely challenging issues. To resolve these issues, we design the basic components of a secure re-encryption scheme for data services in a cloud computing environment and further propose an efficient and secure re-encryption algorithm based on the ElGamal algorithm that satisfies the basic security requirements. The proposed scheme not only makes full use of the powerful processing ability of cloud computing but also effectively ensures cloud data security. Extensive analysis shows that our proposed scheme is highly efficient and provably secure under the existing security model. Copyright © 2015 John Wiley & Sons, Ltd.
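For reference, a minimal textbook ElGamal encryption/decryption sketch, the building block such a re-encryption scheme relies on; the tiny parameters are for illustration only and are nowhere near secure, and this is not the paper's re-encryption construction.

```python
# Textbook ElGamal over a small prime group (illustration only - insecure parameters).
import random

p, g = 467, 2                 # public prime modulus and base (toy values)

def keygen():
    x = random.randrange(2, p - 1)        # private key
    return x, pow(g, x, p)                # (private, public)

def encrypt(y, m):
    k = random.randrange(2, p - 1)        # ephemeral randomness
    return pow(g, k, p), (m * pow(y, k, p)) % p   # ciphertext (c1, c2)

def decrypt(x, c1, c2):
    s = pow(c1, x, p)                     # shared secret g^(k*x)
    return (c2 * pow(s, p - 2, p)) % p    # c2 * s^(-1) mod p (Fermat inverse)

x, y = keygen()
c1, c2 = encrypt(y, 123)
assert decrypt(x, c1, c2) == 123
print("round trip ok")
```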

16.
The energy consumed by servers executing tasks is an important part of the dynamic energy consumption of a cloud computing system. To reduce the total energy of task execution, an energy-optimized earliest-completion-time task scheduling method is proposed, together with a server dynamic power model, a server execution energy model based on dynamic power, and an energy optimization model for the cloud system. According to each task's deadline requirement and its execution energy on different servers, the scheduling policy selects different scheduling algorithms to minimize the total task execution energy. Experimental results show that the proposed method satisfies task deadline requirements well while reducing the total energy consumed by task execution in the cloud system.
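A minimal sketch of the core selection rule such a method implies: among servers that can finish a task before its deadline, pick the one with the lowest execution energy, otherwise fall back to earliest completion time. Server speeds and power figures are assumptions.

```python
# Illustrative deadline-aware, energy-minimizing server selection for one task.
# servers: list of (speed MIPS, dynamic power W, time the server becomes free).

def pick_server(task_length, deadline, servers):
    options = []
    for sid, (speed, power, free_at) in enumerate(servers):
        runtime = task_length / speed
        finish = free_at + runtime
        energy = power * runtime
        options.append((sid, finish, energy))
    feasible = [o for o in options if o[1] <= deadline]
    if feasible:                                   # minimize energy if deadline can be met
        return min(feasible, key=lambda o: o[2])
    return min(options, key=lambda o: o[1])        # otherwise earliest completion time

servers = [(2000, 300.0, 0.5), (800, 90.0, 0.0)]
print(pick_server(task_length=1600, deadline=2.5, servers=servers))
# -> (1, 2.0, 180.0): the slower server meets the deadline with less energy
```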

17.
Cloud computing is a new market-oriented business computing model that provides services to users on demand, and its commercial nature focuses attention on the quality of service delivered to users. Task scheduling and resource allocation are two key technologies in cloud computing, and the virtualization it relies on makes its resource allocation and task scheduling different from traditional parallel and distributed computing. Current scheduling algorithms mostly borrow scheduling strategies from grid environments and study QoS-based scheduling, but their execution efficiency is low. We study task-level scheduling of cloud workflows in depth, analyze the characteristics of virtual machines formed by virtualizing the underlying resources, and, combining the various QoS constraints of workflow tasks, propose a task-level ACS (ant colony system) scheduling algorithm based on the time-sharing characteristics of virtual machines. Experiments show that, compared with the algorithm in reference [1], the proposed algorithm has a clear advantage when executing many parallel tasks; it exploits the time-sharing characteristics of virtualization well and optimizes the scheduling of tasks onto virtual machines.

18.
This paper presents a bi-objective experimental evaluation of online scheduling in the Infrastructure as a Service model of cloud computing with respect to income and power consumption. In this model, customers choose between different service levels; each service level is associated with a price per unit of job execution time and a slack factor that determines the maximal time span within which the requested computing resources must be delivered. The system, via the scheduling algorithms, is responsible for guaranteeing the corresponding quality of service for all accepted jobs. Since we do not consider any optimistic scheduling approach, a job cannot be accepted if its service guarantee would be violated, assuming that all accepted jobs receive the requested resources. We analyze several scheduling algorithms under different cloud configurations and workloads, aiming to maximize provider income and minimize the total power consumption of a schedule, and we distinguish algorithms by the type and amount of information they require: knowledge-free, energy-aware, and speed-aware. To provide effective guidance in choosing a good strategy, we first present a joint analysis of the two conflicting goals based on the degradation in performance, addressing the behavior of each strategy under each metric. We then assess the scheduling algorithms by determining a set of non-dominated solutions that approximates the Pareto-optimal set and use a set coverage metric to compare them in terms of Pareto dominance. We find that a rather simple scheduling approach can provide the best energy and income trade-offs and performs well across a variety of workloads and cloud configurations.
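A minimal sketch of the bi-objective bookkeeping such an evaluation needs: given (income, power) pairs for candidate schedules, keep the non-dominated set that approximates the Pareto front (maximize income, minimize power). The sample points are made up.

```python
# Illustrative non-dominated filter for (income, power_consumption) outcomes:
# maximize income, minimize power.

def dominates(a, b):
    """a dominates b if a is no worse in both objectives and better in at least one."""
    no_worse = a[0] >= b[0] and a[1] <= b[1]
    strictly_better = a[0] > b[0] or a[1] < b[1]
    return no_worse and strictly_better

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

schedules = [(120.0, 900.0), (100.0, 700.0), (90.0, 950.0), (130.0, 1200.0)]
print(pareto_front(schedules))
# -> [(120.0, 900.0), (100.0, 700.0), (130.0, 1200.0)]; (90.0, 950.0) is dominated
```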

19.
Holistic datacenter energy minimization should consider the interactions between computing and cooling and their source-specific usage patterns. Decisions such as workload type, server configuration, load, and utilization contribute to power consumption, shape the datacenter's thermal profile, and affect the energy required to keep temperature within operational thresholds. In this paper, we present an adaptive virtual machine placement and consolidation approach to improve the energy efficiency of a cloud datacenter, accounting for server heterogeneity, the server processor's low-power SLEEP state and its state-transition latency, and integrated thermal controls that keep the datacenter within its operational temperature. Our proposed heuristic approach reduces energy consumption while maintaining an acceptable level of performance.
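A minimal sketch of a consolidation pass in this spirit (my illustration, not the paper's heuristic): VMs are packed onto as few active hosts as possible with first-fit decreasing, and hosts left empty are put into a low-power sleep state. Capacities and demands are assumed.

```python
# Illustrative first-fit-decreasing VM consolidation; empty hosts go to SLEEP.

def consolidate(vm_cpu_demand, host_capacity):
    """Pack VM CPU demands onto hosts; return (placement, hosts to put to sleep)."""
    placement = {}                       # vm -> host index
    load = [0.0] * len(host_capacity)
    for vm, demand in sorted(vm_cpu_demand.items(), key=lambda kv: -kv[1]):
        for h, cap in enumerate(host_capacity):
            if load[h] + demand <= cap:  # first host with enough room
                placement[vm] = h
                load[h] += demand
                break
    sleeping = [h for h, used in enumerate(load) if used == 0.0]
    return placement, sleeping

vms = {"vm1": 0.5, "vm2": 0.3, "vm3": 0.6, "vm4": 0.2}
hosts = [1.0, 1.0, 1.0]
place, asleep = consolidate(vms, hosts)
print(place)                           # {'vm3': 0, 'vm1': 1, 'vm2': 0, 'vm4': 1}
print("hosts put to SLEEP:", asleep)   # [2]
```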

20.
Edge computing (EC), as a complement to cloud computing, can keep computation latency within system requirements when processing tasks generated by IoT devices. In traditional offloading scenarios, gaps between task arrivals leave remote edge clouds idle and underutilized. This paper proposes a Genetic Algorithm-based Multi-edge Collaborative Computing Offloading Model (GAMCCOM), which offloads tasks jointly to the local edge and remote edges and uses a genetic algorithm to find the assignment that minimizes a system cost combining latency and energy consumption. Simulation results show that, when both latency and energy are considered, the scheme reduces the overall system cost by 23% compared with a basic three-tier offloading scheme, and still reduces the cost by 17% and 15% when only latency or only energy is considered, respectively. Thus, for different offloading objectives in edge computing, the GAMCCOM offloading scheme achieves a considerable reduction in system cost.
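A minimal sketch of the weighted latency-plus-energy cost that such a genetic algorithm would evaluate for each candidate offloading assignment; the device, edge, and cloud parameters are illustrative assumptions, and the GA itself is omitted.

```python
# Illustrative system-cost evaluation for an offloading assignment.
# Each task runs locally (0), on the local edge (1), on a remote edge (2),
# or in the cloud (3). Cost = w_t * total_delay + w_e * total_device_energy.

CYCLES = [2e9, 5e9, 1e9]              # CPU cycles per task (assumed)
DATA   = [2e6, 8e6, 1e6]              # bytes to upload per task (assumed)
CPU_HZ = {0: 1e9, 1: 4e9, 2: 4e9, 3: 8e9}          # execution speed per tier
UPLINK = {1: 10e6, 2: 6e6, 3: 2e6}                 # uplink bytes/s per tier
LOCAL_POWER, TX_POWER = 2.0, 1.0                   # device watts (assumed)

def system_cost(assignment, w_t=0.5, w_e=0.5):
    delay = energy = 0.0
    for task, tier in enumerate(assignment):
        if tier == 0:                              # run on the device itself
            t = CYCLES[task] / CPU_HZ[0]
            delay += t
            energy += LOCAL_POWER * t
        else:                                      # upload, then remote execution
            tx = DATA[task] / UPLINK[tier]
            delay += tx + CYCLES[task] / CPU_HZ[tier]
            energy += TX_POWER * tx                # device only pays for transmission
    return w_t * delay + w_e * energy

print(system_cost([1, 2, 0]))   # cost of one candidate assignment a GA would score
```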
