Similar Documents
20 similar documents found.
1.
Cloud computing services face a huge user base, and as node scale expands and task execution times grow, the failure rate of cloud systems keeps rising. To address this, a fault-tolerant cloud scheduling algorithm based on task backup is proposed: each task is mapped to the least-loaded node that holds the task's input data, tasks are backed up according to the cloud's security level, and failed tasks are rescheduled. Simulation results show the algorithm offers good fault tolerance, with a task scheduling success rate of 99%.
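The abstract states the mapping rule only at a high level. The sketch below uses hypothetical data structures and assumes the security level equals the number of backup copies; it illustrates one way such a rule could look, not the paper's implementation.

```python
# Hypothetical sketch of the described mapping rule: among nodes that already
# hold the task's input data, pick the least-loaded one, then back the task
# up on extra nodes according to the security level (assumed: level == copies).
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    load: float                        # current load, arbitrary units
    datasets: set = field(default_factory=set)

def place_task(input_data, nodes, security_level):
    """Return (primary, backups); falls back to all nodes if none hold the data."""
    holders = [n for n in nodes if input_data in n.datasets] or nodes
    primary = min(holders, key=lambda n: n.load)
    others = sorted((n for n in nodes if n is not primary), key=lambda n: n.load)
    return primary, others[:security_level]

nodes = [Node("n1", 0.7, {"d1"}), Node("n2", 0.2, {"d1"}), Node("n3", 0.1)]
primary, backups = place_task("d1", nodes, security_level=1)
print(primary.name, [b.name for b in backups])   # n2 ['n3']
```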

2.
In its simplest structure, cloud computing technology is a massive collection of connected servers residing in a datacenter, continuously changing to provide services to users on demand through a front-end interface. Task failure during execution is no longer an accident but a frequent attribute of scheduling systems in large-scale distributed environments. Recently, computational intelligence techniques have been widely used to tackle scheduling problems in the cloud environment, but only a few emphasize fault tolerance. This paper puts forward a Checkpointed League Championship Algorithm (CPLCA) scheduling scheme for cloud computing systems: a fault-tolerance-aware task scheduling mechanism that combines checkpointing with task migration to guard against unexpected failures of independent task executions. Simulation results show that the proposed CPLCA scheme improves total average makespan by 41%, 33%, and 23% compared with Ant Colony Optimization (ACO), Genetic Algorithm (GA), and the basic League Championship Algorithm (LCA), respectively. In total average response time, CPLCA improves by 54%, 57%, and 30% over ACO, GA, and LCA, respectively. It also yields a significant decrease in job execution failures, as measured by failure metrics and performance improvement rate. From the results obtained, CPLCA improves both task scheduling performance and failure awareness, making it well suited to scheduling in the cloud computing model.
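The abstract names checkpointing plus migration but gives no mechanics. The toy model below, with invented failure-probability and checkpoint-interval parameters, illustrates the resume-from-checkpoint-and-migrate idea only:

```python
# Toy model (assumptions throughout) of checkpoint-based fault handling:
# run a task in checkpointed slices; on a simulated failure, migrate the
# task to another VM and resume from the last checkpoint, not from zero.
import random

def run_with_checkpoints(task_len, vms, interval, fail_p=0.2, seed=1):
    rng = random.Random(seed)
    done, vm, migrations = 0, vms[0], 0
    while done < task_len:
        if rng.random() < fail_p:                          # VM fails mid-slice
            vm = rng.choice([v for v in vms if v != vm])   # migrate the task
            migrations += 1
            continue                                       # resume from checkpoint
        done += min(interval, task_len - done)             # checkpoint reached
    return vm, migrations

print(run_with_checkpoints(task_len=100, vms=["vm1", "vm2", "vm3"], interval=20))
```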

3.
In today's cloud environment, Hadoop has become the de facto standard for big data processing. However, clouds are large-scale, complex, and dynamic, which makes failures likely and affects jobs running on Hadoop. Although Hadoop has built-in failure detection and recovery mechanisms, variations in load across nodes still cause scheduled jobs to fail. To address this, a self-responsive fault-aware detection and scheduling method is proposed: it judges servers to be fast or slow nodes according to their differing load capacities in a heterogeneous environment, dispatches jobs to suitable nodes, and adjusts task decisions to prevent task failures as far as possible. Experimental comparison against the basic scheduler within the Hadoop framework shows the method reduces the job failure rate by up to 19%, shortens job execution time, and also reduces CPU and memory usage.
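As a hedged illustration of the fast/slow classification (the paper's actual detection criteria are not given here), nodes could be labeled relative to the cluster's average capacity and then chosen by lowest relative load:

```python
# Toy sketch of the fast/slow-node idea; the threshold (cluster average)
# and the relative-load tiebreak are illustrative assumptions.
def classify(capacities):
    avg = sum(capacities.values()) / len(capacities)
    return {n: "fast" if c >= avg else "slow" for n, c in capacities.items()}

def pick_node(capacities, loads):
    labels = classify(capacities)
    pool = [n for n in capacities if labels[n] == "fast"] or list(capacities)
    return min(pool, key=lambda n: loads[n] / capacities[n])  # least relative load

caps = {"a": 10.0, "b": 4.0, "c": 9.0}
print(pick_node(caps, loads={"a": 5.0, "b": 1.0, "c": 2.0}))  # "c"
```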

4.
To improve the efficiency of resource scheduling on heterogeneous cloud platforms, a task scheduling scheme based on clustering of tasks and resources is proposed. The K-means algorithm clusters tasks by their CPU and I/O processing times and clusters resources by their computing capacity; each task cluster is then matched to a suitable resource cluster, independent tasks within a cluster are scheduled with the Earliest Deadline First (EDF) algorithm, and dependent tasks are scheduled with a proposed improved Minimum Critical Path (MCP) algorithm. Experimental results show that in a cloud environment with heterogeneous resources, the scheme achieves short task execution times and low energy consumption.
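A minimal sketch of the clustering step, assuming plain two-dimensional k-means over (CPU time, I/O time) pairs; the paper's initialization and its matching of task clusters to resource clusters are not reproduced here.

```python
# Minimal k-means sketch (not the authors' implementation): cluster tasks
# by their (CPU time, I/O time) pairs, as the abstract describes.
import random

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    groups = []
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])))
            groups[j].append(p)                 # assign to nearest center
        centers = [tuple(sum(v) / len(g) for v in zip(*g)) if g else centers[j]
                   for j, g in enumerate(groups)]
    return centers, groups

tasks = [(2, 8), (3, 9), (9, 1), (8, 2), (5, 5)]   # (CPU time, I/O time)
centers, clusters = kmeans(tasks, k=2)
print(centers)
```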

5.
Cloud computing is an information technology deployment model built on virtualization. Task scheduling defines the rules for allocating tasks to specific virtual machines in the cloud environment, and challenges such as achieving optimal scheduling performance must be addressed. First, scheduling performance is improved with a proposed Dynamic Weighted Round-Robin (DWRR) algorithm, which accounts for resource competencies, task priorities, and task length. Second, a heuristic algorithm called Hybrid Particle Swarm Parallel Ant Colony Optimization (HPSPACO) is proposed to solve the task execution delay problem of DWRR-based scheduling. Finally, a fuzzy logic system is designed for HPSPACO to further improve task scheduling in the cloud: a fuzzy method updates the inertia weight of the PSO and the pheromone trails of the PACO. The proposed Fuzzy HPSPACO thus achieves improved task scheduling by minimizing execution and waiting times while maximizing system throughput and resource utilization.
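The abstract says DWRR weighs resource competency, task priority, and length, but not how; the quota and priority rules in the sketch below are therefore illustrative assumptions, not the paper's algorithm.

```python
# Hedged sketch of a dynamic weighted round-robin: higher-priority (then
# shorter) tasks dispatch first; each VM accumulates quota proportional to
# its capacity weight as the ring is traversed.
from itertools import cycle

def dwrr(tasks, vms):
    """tasks: (name, length, priority); vms: {vm_name: capacity_weight}."""
    order = sorted(tasks, key=lambda t: (-t[2], t[1]))
    quota = dict(vms)                        # start with one round of quota
    assign = {vm: [] for vm in vms}
    ring = cycle(vms)
    for name, length, _ in order:
        vm = next(ring)
        while quota[vm] < length:            # quota short: top it up, move on
            quota[vm] += vms[vm]
            vm = next(ring)
        quota[vm] -= length
        assign[vm].append(name)
    return assign

print(dwrr([("t1", 4, 1), ("t2", 2, 3), ("t3", 6, 2)], {"fast": 5, "slow": 2}))
```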

6.
Recently, there has been a dramatic increase in the popularity of cloud computing systems, which provide computing resources on demand, bill on a pay-as-you-go basis, and multiplex many users on the same physical infrastructure. The cloud is an essential pool of resources offered to users through the Internet: pay-per-use computing resources are provided without users having to manage the underlying infrastructure. Scheduling is a significant challenge in cloud computing, as a provider has to serve multiple users in the cloud environment. This proposal implements an optimal task scheduling model for the cloud as an advance over existing technologies. The proposed model solves the task scheduling problem using an improved meta-heuristic called the Fitness Rate-based Rider Optimization Algorithm (FR-ROA), an advanced form of the conventional Rider Optimization Algorithm (ROA). The objectives considered for optimal task scheduling are the maximum makespan (completion time) and the sum of the completion times of all tasks. Since FR-ROA converges quickly, the proposed model is expected to outperform conventional algorithms in achieving optimal task scheduling in the cloud environment.
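For concreteness, the two objective terms can be computed for a candidate assignment as below; the list encoding and the back-to-back execution order are illustrative assumptions, not FR-ROA itself.

```python
# Illustration of the two objectives named in the abstract, evaluated for a
# given task-to-VM assignment (tasks on one VM run back to back, in order).
def objectives(assign, times):
    """assign[t] = VM index of task t; times[t][v] = run time of t on VM v."""
    loads, completions = {}, []
    for t, v in enumerate(assign):
        loads[v] = loads.get(v, 0.0) + times[t][v]
        completions.append(loads[v])         # completion time of task t
    makespan = max(loads.values())
    return makespan, sum(completions)

times = [[3, 5], [4, 2], [6, 6]]
print(objectives([0, 1, 0], times))          # makespan 9, sum 3 + 2 + 9 = 14
```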

7.

Mobile cloud computing is a form of cloud computing that incorporates mobile devices, such as smartphones and tablet PCs, into the cloud infrastructure. Because mobile devices are resource-constrained by nature, new scheduling strategies are required when using them as resource providers. Building on our previous group-based scheduling algorithm, we present fault-tolerant scheduling algorithms that use checkpoint and replication mechanisms to actively cope with faults. A simulation-based performance evaluation demonstrates that our algorithm is more efficient than the existing one lacking fault tolerance in terms of accuracy rate, resource consumption, and average execution time. In particular, the average execution time was reduced by about 60%, resulting in reduced resource consumption.

8.
Mobile edge cloud computing is a promising computing paradigm in which mobile users can offload their application workloads to low-latency local edge cloud resources. However, compared with remote public cloud resources, conventional local edge clouds are limited in computation capacity, especially when serving large numbers of mobile applications. To deal with this problem, we present a hierarchical edge cloud architecture that integrates local edge clouds and public clouds to improve the performance and scalability of scheduling for mobile applications. In addition, to achieve a trade-off between cost and system delay, a fault-tolerant dynamic resource scheduling method is proposed for mobile edge cloud computing. The optimization problem is formulated to minimize application cost while satisfying user-defined deadlines. Specifically, a game-theoretic scheduling mechanism is first adopted for resource provisioning and scheduling of multiprovider mobile applications. A mobility-aware dynamic scheduling strategy then updates the schedule in light of user mobility, and a failure recovery mechanism handles uncertainties during application execution. Finally, experiments validate the effectiveness of the proposal; the results show that the method achieves a trade-off between cost and system delay.

9.
曹洁, 曾国荪. 《计算机应用》 2015, 35(3): 648-653
Processor failures in the cloud have become a problem that cloud computing cannot ignore, and fault tolerance has become a key requirement in designing and developing cloud systems. To address the low scheduling efficiency and single task type of some fault-tolerant scheduling algorithms, a fault-tolerant scheduling method is proposed that groups processors and the primary and backup versions of tasks; a criterion for deciding when backup versions may execute with overlap is given, together with a formula for the worst-case task response time. Experiments and analysis show that, compared with previous algorithms, dividing processors into two groups that execute primary and backup versions respectively reduces the schedulability-test time required during scheduling, increases the opportunity for backup versions to overlap, and reduces the number of processors needed, which matters for improving processor utilization and fault-tolerant scheduling efficiency.

10.
To shorten task waiting time during scheduling in cloud computing and improve the execution efficiency of the virtual machine scheduling system, a task scheduling model based on a queueing system is proposed for the cloud environment. The stationary distribution of the system and its conditional stochastic decomposition are analyzed, yielding the stochastic decomposition of the stationary queue length and the stationary waiting time; a numerical example pinpoints the relationship between the service rate and the expected queue length, expected waiting time, and other performance metrics. Simulation of a cloud task scheduling system verifies that the model completes cloud task scheduling quickly and improves the average utilization of virtual machine resources.
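The abstract reports stationary queue-length and waiting-time results without reproducing the formulas. Purely as a hedged illustration, if the scheduler behaved as a classical M/M/1 queue with arrival rate λ and service rate μ (an assumption; the paper's model may differ), the service rate would relate to the expected queue length and waiting time as:

```latex
% Illustrative M/M/1 relations (assumed model, not taken from the paper)
\rho = \frac{\lambda}{\mu} < 1, \qquad
L = \frac{\rho}{1-\rho}, \qquad
W = \frac{1}{\mu - \lambda} \qquad (\text{Little's law: } L = \lambda W)
```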

11.
赵璞, 肖人彬. 《控制与决策》 2023, 38(5): 1352-1362
To address the limited computation and storage resources of edge devices in edge computing environments, this work explores efficient edge-cloud collaborative task scheduling and resource caching strategies and studies the mechanisms of self-organizing labor-division swarm intelligence models. On that basis, it proposes an edge cloud collaborative task scheduling algorithm based on the bee colony labor division "activator-inhibitor" model (ECCTS-BCLDAI) and an edge cloud collaborative resource caching algorithm based on the ant colony labor division "stimulus-response" model (ECCRC-ACLDSR). Simulation results show that the proposed ECCTS-BCLDAI task scheduling algorithm outperforms traditional algorithms in reducing average task execution time and edge-cloud collaboration cost, while the proposed ECCRC-ACLDSR resource caching algorithm, in reducing average task time, optimizing network bandwidth occupancy, and reducing...

12.
罗慧兰. 《计算机测量与控制》 2017, 25(12): 150-152, 176
To shorten cloud computing execution time, improve cloud performance, and to some extent raise the success rate of cloud resource nodes completing tasks, cloud computing resources must be scheduled. Existing cloud resource scheduling algorithms select suitable scheduling parameters and use the CloudSim simulation tool to schedule resources, but they cannot balance load effectively at run time, so scheduling balance is poor and scheduling results carry large errors. A cloud resource scheduling algorithm based on Wi-Fi and the Web is therefore proposed: it first denoises the cloud resource data stream with an adaptive cascade filtering algorithm, then preprocesses the resources using ontology methods on the denoised results, and finally schedules the resources with an artificial bee colony algorithm. Experimental results show the algorithm applies well to cloud resource scheduling and effectively improves resource utilization, demonstrating practicality and providing reliable support for subsequent research in this field.

13.
In order to optimize quality of service (QoS) and task execution time, a new resource scheduling approach based on improved particle swarm optimization (IPSO) is proposed to improve efficiency. In cloud computing, the first principle of resource scheduling is to meet users' needs, and the goal is to optimize the scheduling scheme and maximize overall efficiency; this requires scheduling that is flexible, real-time, and efficient, so that the massive resources of the cloud can effectively meet users' demands. A scientific resource scheduling algorithm is key to improving the efficiency of cloud resource distribution and the level of cloud services. A physical model of cloud resource scheduling is established, and the performance of the IPSO algorithm applied to cloud resource scheduling is analyzed in a designed experiment. The comparison shows that the new algorithm improves on PSO by taking full account of users' QoS requirements and the load balance of the cloud environment. In conclusion, research on cloud resource scheduling based on IPSO can solve the resource scheduling problem to a certain extent.
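As a hedged illustration of swarm-based assignment (not the paper's IPSO variant, whose operators and parameters are not given here), a discrete PSO-style search over task-to-VM vectors might look like this:

```python
# Discrete PSO-style sketch: each particle is a task-to-VM assignment vector;
# dimensions are pulled toward the global best, with mutation for diversity.
import random

def pso_schedule(times, n_particles=20, iters=50, seed=0):
    """times[t][v] = execution time of task t on VM v; minimize makespan."""
    rng = random.Random(seed)
    n_tasks, n_vms = len(times), len(times[0])

    def makespan(pos):
        loads = [0.0] * n_vms
        for t, v in enumerate(pos):
            loads[v] += times[t][v]
        return max(loads)

    swarm = [[rng.randrange(n_vms) for _ in range(n_tasks)]
             for _ in range(n_particles)]
    best = min(swarm, key=makespan)[:]
    for _ in range(iters):
        for p in swarm:
            for t in range(n_tasks):
                if rng.random() < 0.5:      # pull dimension toward global best
                    p[t] = best[t]
                elif rng.random() < 0.1:    # random mutation keeps diversity
                    p[t] = rng.randrange(n_vms)
        cand = min(swarm, key=makespan)
        if makespan(cand) < makespan(best):
            best = cand[:]
    return best, makespan(best)

print(pso_schedule([[3, 5], [4, 2], [6, 6], [2, 3]]))
```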

14.
Adaptive checkpointing strategy to tolerate faults in economy based grid
In this paper, we develop a fault-tolerant job scheduling strategy for economy-based grid environments. We propose a novel adaptive task-checkpointing-based fault-tolerant job scheduling strategy that maintains a fault index of grid resources, updated dynamically on the successful or unsuccessful completion of each assigned task. Whenever a grid resource broker has tasks to schedule on grid resources, it uses the fault index from the fault-tolerant schedule manager in addition to a time optimization heuristic. While scheduling a grid job on a resource, the broker uses the fault index to apply different intensities of task checkpointing (inserting checkpoints into a task at different intervals). To simulate and evaluate the proposed strategy, this paper extends the GridSim Toolkit-4.0 to exhibit fault-tolerance-related behavior, and compares the checkpointing fault-tolerant job scheduling strategy with the well-known time optimization heuristic in an economy-based grid. The measured results show that, even in the presence of faults, the proposed strategy schedules grid jobs effectively, tolerates faults gracefully, and executes more jobs successfully within the specified deadline and allotted budget; it also improves overall execution time and minimizes the execution cost of grid jobs.
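A minimal sketch of the adaptive idea, assuming an exponential-decay update for the fault index and an inverse mapping from fault score to checkpoint interval (neither rule is specified by the abstract):

```python
# Hedged sketch: per-resource fault index, updated on each task outcome;
# faultier resources get shorter checkpoint intervals (more checkpoints).
class FaultIndex:
    def __init__(self):
        self.score = {}                       # resource -> fault score

    def report(self, resource, succeeded):
        s = self.score.get(resource, 0.0)
        # assumed update: failures raise the score, successes let it decay
        self.score[resource] = 0.8 * s + (0.0 if succeeded else 1.0)

    def checkpoint_interval(self, resource, base=100.0):
        # assumed mapping: interval shrinks as the fault score grows
        return base / (1.0 + self.score.get(resource, 0.0))

fi = FaultIndex()
fi.report("r1", succeeded=False)
fi.report("r1", succeeded=False)
print(fi.checkpoint_interval("r1"))   # well below the base interval of 100
```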

15.

Providing the required level of service quality in cloud computing is one of its most significant challenges, owing to software and hardware complexity, the differing characteristics of tasks and computing resources, and inappropriate distribution of tasks across cloud environments. Recent research in this field shows that the lack of smart prioritization and ordering of tasks in scheduling (an NP-hard problem) leads to poor load balancing, increased response time, increased total execution time, and decreased average resource use. Accordingly, the method proposed here, called LATOC, first considers the key criteria of an input task, such as its required processing unit, data length, and execution time. It then prioritizes tasks into separate queues using the technique for order preference by similarity to ideal solution (TOPSIS) and the analytic hierarchy process (AHP) in the form of a hybrid intelligent algorithm (AHP-TOPSIS). Each task is placed in a priority queue according to its priority level, and tasks are then assigned from each queue to virtual machines using optimized particle swarm optimization. Extensive simulations across various scenarios in the CloudSim simulator show that LATOC's smart assignment of prioritized tasks improves important cloud computing parameters, such as total execution time and average resource use, compared with similar methods.
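The TOPSIS scoring step can be sketched as follows; in LATOC the criteria weights would come from AHP, so the numeric weights below are placeholders, not values from the paper.

```python
# Illustrative TOPSIS ranking of tasks over three criteria
# (CPU demand, data length, execution time).
def topsis(rows, weights, benefit):
    """rows[i][j]: criterion j of task i; benefit[j]: True if larger is better."""
    m = len(weights)
    norms = [sum(r[j] ** 2 for r in rows) ** 0.5 for j in range(m)]
    v = [[r[j] / norms[j] * weights[j] for j in range(m)] for r in rows]
    cols = list(zip(*v))
    ideal = [max(c) if benefit[j] else min(c) for j, c in enumerate(cols)]
    worst = [min(c) if benefit[j] else max(c) for j, c in enumerate(cols)]
    scores = []
    for r in v:
        d_pos = sum((a - b) ** 2 for a, b in zip(r, ideal)) ** 0.5
        d_neg = sum((a - b) ** 2 for a, b in zip(r, worst)) ** 0.5
        scores.append(d_neg / (d_pos + d_neg))
    return scores          # higher score -> higher-priority queue

print(topsis([[4, 200, 30], [2, 50, 10]], [0.5, 0.2, 0.3], [True, False, False]))
```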

16.
This paper proposes a scheduling algorithm for task scheduling in a cloud computing system with time-varying communication conditions. The algorithm converts the scheduling problem with communication changes into a directed acyclic graph (DAG) scheduling problem containing fuzzy communication task nodes, that is, a scheduling problem for a communication-change DAG (CC-DAG). The CC-DAG contains both computation task nodes and communication task nodes. First, a weighted time-series network bandwidth model is proposed to solve the indefinite processing time (cost) problem for a fuzzy communication task node; this model can accurately predict the node's processing time. Second, to address the scheduling order of the computation task nodes, a dynamic pre-scheduling search strategy (DPSS) is proposed. This strategy computes the essential paths for pre-scheduling of the computation task nodes, based on the actual computation costs (times) of the computation task nodes and the predicted processing costs (times) of the fuzzy communication task nodes during scheduling. The computation task node with the longest essential path is scheduled first, because its completion time directly influences the completion time of the task graph. Finally, simulation experiments show that the proposed DPSS improves total execution time by between 11.5% and 21.2%, providing better-quality scheduling solutions for scientific application tasks in the cloud than the HEFT, PEFT, and CEFT algorithms.
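One plausible reading of a weighted time-series bandwidth model is an exponentially weighted average of recent samples; the sketch below uses that assumption (the paper's exact weighting is not reproduced) to price a fuzzy communication node:

```python
# Hedged sketch: predict bandwidth from recent samples with exponential
# weights (newer samples count more), then derive a transfer-time cost.
def predict_bandwidth(samples, decay=0.7):
    """samples: bandwidth measurements, oldest first (e.g., MB/s)."""
    weights = [decay ** (len(samples) - 1 - i) for i in range(len(samples))]
    return sum(w * s for w, s in zip(weights, samples)) / sum(weights)

def comm_cost(data_mb, samples):
    return data_mb / predict_bandwidth(samples)   # predicted transfer seconds

print(comm_cost(800, [95.0, 100.0, 80.0, 60.0]))  # recent slowdown dominates
```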

17.
Cloud computing is becoming a profitable technology because it offers cost-effective IT solutions globally. A well-designed task scheduling algorithm ensures optimal utilization of cloud resources and dynamically reduces execution time. This research article deals with scheduling inter-dependent subtasks on unrelated parallel computing machines in a cloud computing environment, and considers two variants of the problem based on two different objective functions: the first minimizes total completion time, while the second minimizes makespan. Heuristic and meta-heuristic (HEART) based algorithms are proposed to solve the task scheduling problems; these algorithms exploit the list scheduling property of the unrelated parallel machine scheduling problem. A mixed integer linear programming (MILP) formulation is provided for both variants, and optimal solutions are obtained by solving it with A Mathematical Programming Language (AMPL) software. Extensive numerical experiments evaluate the performance of the proposed algorithms, whose solutions are found to outperform existing algorithms. The proposed algorithms can be used by cloud computing service providers (CCSPs) to improve resource utilization and reduce operating cost.

18.
The energy consumed by servers executing tasks is a major component of a cloud system's dynamic energy consumption. To reduce the total energy consumed by task execution in cloud systems, an energy-optimized earliest-finish-time task scheduling method is proposed, together with a server dynamic power model, a server execution-energy model based on dynamic power, and an energy optimization model for the cloud system. The scheduling policy selects different scheduling algorithms according to a task's deadline requirement and its execution energy on different servers, so as to minimize total execution energy. Experimental results show that the proposed method satisfies task deadline requirements well while reducing the total energy consumed by task execution in the cloud system.
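A hedged sketch of the selection rule: for each task, choose the server that minimizes execution energy among those still meeting the deadline, falling back to earliest finish time when no server can. The speed and power figures are invented for illustration.

```python
# Energy = dyn_power * (length / speed); ready[s] = server's next free time.
def schedule_energy(tasks, servers):
    """tasks: (name, length, deadline); servers: (name, speed, dyn_power)."""
    ready = {name: 0.0 for name, _, _ in servers}
    plan = []
    for name, length, deadline in sorted(tasks, key=lambda t: t[2]):
        feasible = [(p * length / s, ready[n] + length / s, n)
                    for n, s, p in servers if ready[n] + length / s <= deadline]
        if feasible:
            energy, finish, srv = min(feasible)            # least energy wins
        else:                                              # deadline missed anyway:
            finish, energy, srv = min((ready[n] + length / s, p * length / s, n)
                                      for n, s, p in servers)
        ready[srv] = finish
        plan.append((name, srv, round(energy, 1)))
    return plan

tasks = [("t1", 60, 30.0), ("t2", 40, 25.0)]
servers = [("s1", 4.0, 20.0), ("s2", 2.0, 8.0)]
print(schedule_energy(tasks, servers))   # t2 -> s2 (160.0), t1 -> s1 (300.0)
```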

19.
As big data analysis and processing requirements grow more complex, the analysis process must be expressed as big data workflows built from tasks and inter-task dependencies, so that processing becomes structured, repeatable, controllable, extensible, and automated. Orchestrating and managing big data workflows has therefore become an important research topic, made more complicated by the heterogeneity of resources in cloud environments. This survey divides research on big data workflow orchestration in the cloud into four aspects: workflow construction, workflow partitioning, task scheduling and execution, and fault tolerance. It reviews classic and highly cited work in each aspect, then classifies and organizes the mainstream techniques, analyzing the methods proposed in each study along with their characteristics, advantages, and points needing improvement. Finally, it returns to the perspective of big data processing systems and categorizes the benefits each line of research brings to such systems.

20.
To address the problem that existing cloud scheduling algorithms pursue the shortest completion time without properly balancing load, a Min-Min scheduling algorithm based on pre-classification is proposed. The algorithm first grades resources using attribute information that measures their computing and communication capability, then finds each task's minimum execution time across resources, computes the product of the corresponding resource grade and that minimum execution time, and schedules the task-resource pair with the smallest product. This resolves the load imbalance of the original Min-Min algorithm while still accounting for minimum execution time. Simulation results on a cloud platform show the algorithm outperforms the original Min-Min scheduling algorithm in average task response time, average task-execution-speed decline ratio, and system utilization.
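The grade-times-minimum-execution-time rule can be sketched as below; it assumes lower grades denote better resources and folds each resource's ready time into the execution-time term, both of which are interpretation choices rather than details from the paper.

```python
# Sketch of the pre-classified Min-Min rule: per round, each task finds its
# best resource by completion time, and the pair with the smallest
# grade * completion-time product is scheduled.
def classified_min_min(etc, grades):
    """etc[t][r]: expected time of task t on resource r; grades[r]: grade."""
    tasks = set(range(len(etc)))
    ready = [0.0] * len(grades)
    order = []
    while tasks:
        best = None
        for t in tasks:
            r = min(range(len(grades)), key=lambda j: etc[t][j] + ready[j])
            score = grades[r] * (etc[t][r] + ready[r])     # grade * min time
            if best is None or score < best[0]:
                best = (score, t, r)
        _, t, r = best
        ready[r] += etc[t][r]
        order.append((t, r))
        tasks.remove(t)
    return order

print(classified_min_min([[5, 9], [8, 3], [4, 4]], grades=[1.0, 1.2]))
```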
