Similar documents
20 similar documents were retrieved.
1.
In cloud cryptographic service systems, service requests are diverse and data-dependent job flows are randomly interleaved with data-independent ones. To avoid the communication overhead and the data security risks caused by exchanging related data between processing nodes, a cloud cryptographic job-flow scheduling algorithm based on related-data localization is designed. First, the cryptographic functions requested by tasks are mapped so that the cryptographic functions requested by multiple job flows are implemented correctly. Then, for tasks that request the same cryptographic function but interleave different working modes, a task priority calculation method is proposed to improve the fairness of multi-job-flow scheduling, and a classified scheduling method is adopted, which localizes related data while preserving the overall performance of the scheduling system. Simulation results show that the algorithm effectively reduces task completion time, improves resource utilization and fairness, and exhibits good stability.

2.
Load balancing and job scheduling in server clusters are important factors affecting system performance. This paper describes the job scheduling problem for batch tasks in a server cluster and builds a graph-based model of it. Since general heuristic algorithms and dynamic programming have limitations on this problem, an ant colony algorithm is introduced to solve it, and a suitable method for computing the heuristic distance is proposed for this specific problem. Finally, the optimization effect and convergence of the algorithm are discussed on the basis of simulation; the results show that the ant colony algorithm performs very well on this problem.

3.
A cloud computing task scheduling model based on an improved ant colony algorithm (total citations: 2; self-citations: 0; citations by others: 2)
To address resource scheduling in cloud environments, a scheduling model is proposed that improves task parallelism while respecting serial dependencies between tasks. Dynamic tasks submitted by users are split into subtasks with precedence constraints and placed, in execution order, into scheduling queues of different priorities. Subtasks within the same queue are scheduled by an improved ant colony algorithm based on the shortest task delay time (DSFACO), which minimizes task delay while balancing scheduling fairness and efficiency, thereby improving user satisfaction. Experimental results show that, compared with the task-scheduling enhanced ant colony algorithm, DSFACO performs better in task delay, scheduling fairness, and efficiency, and can achieve optimal task scheduling in the cloud computing environment.
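As a rough illustration of the queueing step described above (not the paper's implementation; the level-by-level grouping and all names are assumptions), the sketch below splits a submitted task's subtask DAG into priority queues by topological level, so subtasks with no unmet precedence constraints sit in the highest-priority queue:

```python
from collections import defaultdict, deque

def build_priority_queues(subtasks, deps):
    """Group subtasks into queues by topological level.

    subtasks: iterable of subtask ids
    deps: dict mapping subtask id -> set of prerequisite subtask ids
    Returns a list of queues; queue 0 has the highest priority
    (no unmet prerequisites), queue k depends only on queues < k.
    """
    indegree = {t: len(deps.get(t, ())) for t in subtasks}
    children = defaultdict(list)
    for t, reqs in deps.items():
        for r in reqs:
            children[r].append(t)

    ready = deque(t for t, d in indegree.items() if d == 0)
    level = {t: 0 for t in ready}
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for c in children[t]:
            indegree[c] -= 1
            level[c] = max(level.get(c, 0), level[t] + 1)
            if indegree[c] == 0:
                ready.append(c)

    queues = defaultdict(list)
    for t in order:
        queues[level[t]].append(t)
    return [queues[k] for k in sorted(queues)]
```

Subtasks within each returned queue would then be assigned to resources by the ant colony step, which is omitted here.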

4.
Process scheduling is the core of the Linux operating system: it binds the other parts of the system into a whole and plays a crucial role in resource allocation and system progress. This paper studies the CFS (completely fair scheduler) algorithm and its scheduling process and, combining it with the idea of group scheduling, proposes a dynamically allocated CFS group scheduling policy. The policy uses group load as the main criterion for allocating CPU resources while also maintaining fairness between groups. Following this principle, the original Linux kernel was modified to implement the mechanism. Test results show that this scheduling approach effectively handles CPU allocation for groups under different loads and extends the practicality of group scheduling.
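The abstract does not give the exact allocation formula, so the following is only a minimal sketch of the idea of weighting per-group CPU shares by group load while reserving a fairness floor for every group; the `floor` parameter and the normalization are assumptions, not the paper's kernel patch:

```python
def group_cpu_shares(group_loads, floor=0.05):
    """Split CPU capacity among groups proportionally to their load,
    while guaranteeing each group a minimum (fairness) share.

    group_loads: dict group_name -> measured load (e.g. runnable tasks)
    Returns dict group_name -> fraction of total CPU in [0, 1].
    """
    n = len(group_loads)
    if n == 0:
        return {}
    floor = min(floor, 1.0 / n)      # never promise more than 100% in total
    remaining = 1.0 - floor * n      # capacity divided according to load
    total_load = sum(group_loads.values())
    shares = {}
    for g, load in group_loads.items():
        proportional = (load / total_load) if total_load > 0 else 1.0 / n
        shares[g] = floor + remaining * proportional
    return shares
```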

5.
Existing resource scheduling models for cloud computing do not adequately consider resource evaluation. To better accommodate nodes with different computing performance and the processing demands of large-scale data environments, a virtual machine resource scheduling strategy based on a multi-dimensional evaluation model is proposed. First, a multi-dimensional resource evaluation model that includes network performance is built for the cloud environment; on this basis, an improved ant colony optimization algorithm implements the scheduling strategy, which is then realized on the cloud simulation platform CloudSim. Experimental results show that the algorithm adapts better to computing environments with different network performance, significantly improves scheduling performance, and reduces the deviation of virtual machine load balancing, meeting the load-balancing requirements for virtual machine resources in cloud environments.

6.
Existing scheduling algorithms in cloud computing pursue the shortest completion time and therefore do not balance load well. To address this, a Min-Min scheduling algorithm based on prior classification is proposed. The algorithm first grades resources using attribute information that measures their computing and communication capability, then finds each task's minimum execution time over the resources, computes the product of the corresponding resource grade and that minimum execution time, and schedules the task-resource pair with the smallest product. This resolves the load imbalance of the original Min-Min algorithm while still keeping execution time small. Experiments on a simulated cloud system show that the algorithm outperforms the original Min-Min algorithm in average task response time, average task slowdown ratio, and system utilization.
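A minimal sketch of the selection rule described above, assuming a pre-computed numeric grade per resource (the abstract does not fix how grades are scaled) and an execution-time estimate per task/resource pair; function and parameter names are illustrative:

```python
def preclassified_min_min(tasks, resources, exec_time, grade):
    """One pass of the pre-classified Min-Min rule.

    tasks: list of unscheduled task ids
    resources: list of resource ids
    exec_time(t, r): estimated execution time of task t on resource r
    grade[r]: grade of resource r, derived from its computing and
              communication capability (scaling is an assumption)
    Returns a list of (task, resource) assignments.
    """
    schedule = []
    remaining = list(tasks)
    while remaining:
        best = None  # (product, task, resource)
        for t in remaining:
            # minimum execution time of task t over all resources
            r_min = min(resources, key=lambda r: exec_time(t, r))
            product = grade[r_min] * exec_time(t, r_min)
            if best is None or product < best[0]:
                best = (product, t, r_min)
        _, t, r = best
        schedule.append((t, r))   # schedule the pair with the smallest product
        remaining.remove(t)
    return schedule
```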

7.
Edge computing demands high real-time performance and large-scale interactive data processing. Long scheduling times, high communication latency, and load imbalance among heterogeneous edge nodes are the core problems affecting edge computing performance, and traditional cloud computing platforms struggle to meet these new requirements. This paper studies scheduling optimization for Storm edge nodes in an edge computing environment and builds a Storm task offloading and scheduling model for edge computing. For the real-time dynamic assignment of topology tasks among heterogeneous edge nodes, a heuristic dynamic programming algorithm (Inspire Dynamic Programming, IDP) is proposed that achieves globally optimized scheduling by changing how Storm Task instances are ordered and assigned and how Task instances are mapped to Slots. In addition, since the concurrency of topology tasks is limited by the JVM stack depth, a scheduling strategy based on the bat algorithm is proposed. Experimental results show that, compared with Storm's scheduler, the proposed algorithms improve edge-node CPU utilization by about 60% on average and cluster throughput by about 8.2% on average, and can therefore meet the high real-time processing requirements between edge nodes.

8.
Research on a load optimization model and algorithms in cloud computing (total citations: 1; self-citations: 0; citations by others: 1)
Because cloud computing environments are dynamic and heterogeneous, load imbalance arises easily and seriously degrades overall performance and user experience. This paper proposes a load-balancing optimization model based on an improved genetic algorithm. Taking into account dynamically changing resource demands and the computing capacity of virtual machines, a corresponding resource scheduling model is built, and the improved genetic algorithm is applied to balance resource load. Validation shows that the algorithm meets the operational requirements of data centers in cloud environments and improves resource utilization and the degree of load balance.

9.
In cloud computing environments, MapReduce clusters have become a powerful platform for processing large-scale data sets. To address their shortcomings in user QoS and cluster resource utilization during task scheduling, a scheduling strategy based on ant colony optimization (ACO-SS) is proposed. The strategy considers both a priority calculation model and the task scheduling process; it effectively satisfies user QoS, balances load across cluster nodes, lets tasks distributed on the nodes use resources more reasonably, and improves the scheduling performance of the system. Finally, CloudSim simulation experiments show that the strategy has clear advantages in key metrics such as overall job completion time and resource utilization.

10.
Optimized design of a scheduling algorithm for real-time systems (total citations: 1; self-citations: 1; citations by others: 1)
This article introduces a simplified model of the Linux real-time scheduling algorithm and proposes an optimized, improved scheduling algorithm. The algorithm builds a process priority queue based on the importance of each process while also accounting for how urgently the process must finish within its deadline; it can be implemented with a doubly linked list. Comparative experiments show that, relative to the original algorithm and especially under normal CPU load, the optimized algorithm achieves higher value completion and process completion ratios, effectively improving the real-time performance of the operating system.
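The abstract does not specify how importance and urgency are combined; the weighted-sum form below is purely an assumption made to render the idea concrete, and a plain Python list stands in for the doubly linked priority queue:

```python
import time

def priority(importance, deadline, exec_left, now=None, w=0.5):
    """Combine process importance with deadline urgency (assumed weighted sum).

    importance: static value of the process (higher = more important)
    deadline:   absolute deadline (seconds since the epoch)
    exec_left:  estimated remaining execution time (seconds)
    Urgency grows as the slack (deadline - now - exec_left) shrinks.
    """
    now = time.time() if now is None else now
    slack = max(deadline - now - exec_left, 1e-6)
    urgency = 1.0 / slack
    return w * importance + (1.0 - w) * urgency

def insert_by_priority(queue, proc, prio):
    """Keep the ready queue ordered by descending priority
    (a list here; the paper uses a doubly linked list)."""
    pos = next((i for i, (_, p) in enumerate(queue) if prio > p), len(queue))
    queue.insert(pos, (proc, prio))
```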

11.
Cloud computing allows the execution and deployment of different types of applications, such as interactive databases or web-based services, which require distinctive types of resources. These applications lease cloud resources for a considerably long period and usually occupy various resources to maintain a high quality-of-service (QoS) level. On the other hand, general big-data batch processing workloads are less QoS-sensitive and require massively parallel cloud resources for short periods. Despite the elasticity of cloud computing, the fine-scale characteristics of cloud-based applications may cause temporarily low resource utilization in cloud computing systems, while process-intensive, highly utilized workloads suffer from performance issues. Therefore, the ability to schedule heterogeneous workloads in a utilization-efficient way is a challenging issue for cloud owners. In this paper, addressing the impact of heterogeneity on low utilization of cloud computing systems, a conjunct resource allocation scheme for cloud applications and processing jobs is presented to enhance cloud utilization. The main idea is to run processing jobs and cloud applications jointly in a preemptive way. However, utilization-efficient resource allocation requires exact modeling of workloads, so a novel methodology to model the processing jobs and the other cloud applications is first proposed. Such jobs are modeled as a collection of parallel and sequential tasks in a Markovian process, which makes it possible to analyze and calculate the resources required to serve the tasks efficiently. The next step uses the proposed model to develop a preemptive scheduling algorithm for the processing jobs in order to improve resource utilization and its associated costs in the cloud computing system. Accordingly, a preemption-based resource allocation architecture is proposed to effectively and efficiently use idle reserved resources for the processing jobs in cloud paradigms. Performance metrics such as the service time of the processing jobs are then investigated. The accuracy of the proposed analytical model and scheduling analysis is verified through simulations and experimental results, which also shed light on the achievable QoS level for the preemptively allocated processing jobs.

12.
Allocating submeshes to jobs in mesh-connected multicomputers in an FCFS fashion can lead to poor system performance (e.g., long job waiting delays) because the job at the head of the waiting queue can prevent the allocation of free submeshes to other waiting jobs with smaller submesh requirements. However, serving jobs aggressively out of order can lead to excessive waiting delays for jobs with large allocation requests. In this paper, we propose a scheduling scheme that uses a window of consecutive jobs from which it selects jobs for allocation and execution. This window starts with the current oldest waiting job and corresponds to the lookahead of the scheduler. The performance of the proposed window-based scheme has been compared to that of FCFS and other previous job scheduling schemes. Extensive simulation results based on synthetic workloads and real workload traces indicate that the new scheduling strategy exhibits good performance when the scheduling window size is large. In particular, it is substantially superior to FCFS in terms of system utilization, average job turnaround times, and maximum waiting delays under medium to heavy system loads. It is also superior to aggressive out-of-order scheduling in terms of maximum job waiting delays. Window-based job scheduling can improve both overall system performance and fairness (i.e., maximum job waiting delays) by adopting large lookahead job scheduling windows.
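A simplified sketch of the lookahead-window idea; the submesh allocation itself is abstracted into a `can_allocate` callback, and the one-pass scan order is an assumption rather than the paper's exact policy:

```python
def schedule_with_window(wait_queue, window_size, can_allocate, start_job):
    """Scan a lookahead window starting at the oldest waiting job and start
    every job in the window whose (submesh) request can currently be met.

    wait_queue: list of jobs in arrival order (index 0 = oldest)
    can_allocate(job): True if a free submesh fits the job right now
    start_job(job): allocates the submesh and launches the job
    Returns the jobs started in this pass.
    """
    started = []
    window = wait_queue[:window_size]   # the scheduler's lookahead
    for job in window:
        if can_allocate(job):
            start_job(job)
            started.append(job)
    for job in started:
        wait_queue.remove(job)
    return started
```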

13.
Yuan Ye, Yan Li. Computer Engineering, 2012, 38(12): 287-290
In multiprocessor real-time scheduling, the value chosen for the interference upper bound has a considerable influence on the performance of schedulability tests. For the earliest-deadline-first scheduling algorithm in real-time systems, this paper introduces concepts related to task slack and proposes a schedulability test based on workload computation. By reducing the workload attributed to carry-in jobs within the problem window, the test increases the likelihood that a task set passes the schedulability check. Experimental results show that, as the number of processors increases, this test outperforms traditional methods by 5% to 10%.

14.
In commercial grid and cloud computing environments, a job has parameters such as arrival time, computation amount, budget, and deadline, where the budget is a function of time. Accurately distinguishing the importance and urgency of jobs is a key problem for a job scheduling system. This paper uses these four parameters together to define job priority and proposes a grid job scheduling algorithm based on value density and relative deadline. Scheduling of soft real-time and hard real-time grid jobs is simulated separately. The simulation results show that the proposed algorithm outperforms all compared algorithms in both cases, and its advantage is more pronounced for hard real-time jobs.
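A minimal sketch of one way to turn the four job parameters into a priority; the exact priority formula is not given in the abstract, so the value-density and relative-deadline expressions below, and the way they are combined, are assumptions:

```python
def job_priority(arrival, computation, budget_fn, deadline, now):
    """Priority from value density and relative deadline (illustrative only).

    arrival:      job arrival time (kept for completeness)
    computation:  required computation amount (e.g. CPU seconds)
    budget_fn(t): budget (value) paid if the job finishes at time t,
                  i.e. the budget is a function of time
    deadline:     absolute deadline
    """
    value_now = budget_fn(now + computation)       # value if started immediately
    value_density = value_now / computation        # value per unit of work
    relative_deadline = max(deadline - now, 1e-6)  # time left until the deadline
    # Higher value density (importance) and a closer deadline (urgency)
    # both raise the priority.
    return value_density / relative_deadline
```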

15.
In this paper, a heuristic dynamic scheduling scheme for parallel real-time jobs executing on a heterogeneous cluster is presented. In our system model, parallel real-time jobs, which are modeled by directed acyclic graphs, arrive at a heterogeneous cluster following a Poisson process. A job is said to be feasible if all its tasks meet their respective deadlines. The scheduling algorithm proposed in this paper takes reliability measures into account, thereby enhancing the reliability of heterogeneous clusters without any additional hardware cost. To make scheduling results more realistic and precise, we incorporate scheduling and dispatching times into the proposed scheduling approach. An admission control mechanism is in place so that parallel real-time jobs whose deadlines cannot be guaranteed are rejected by the system. For the experimental performance study, we have considered a real-world application as well as synthetic workloads. Simulation results show that, compared with existing scheduling algorithms in the literature, our scheduling algorithm reduces reliability cost by up to 71.4% (with an average of 63.7%) while improving schedulability over a spectrum of workload and system parameters. Furthermore, the results suggest that shortening scheduling times leads to a higher guarantee ratio. Hence, if parallel scheduling algorithms are applied to shorten scheduling times, the performance of heterogeneous clusters will be further enhanced.
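As a rough illustration of the admission-control idea, the check below walks a job's DAG in dependency order, adds assumed scheduling and dispatching overheads to each task's finish time, and rejects the job if any task would miss its deadline; the overhead parameters, the single-estimate execution times, and the function names are assumptions, not the paper's algorithm:

```python
def admit_job(tasks, deps, now, sched_overhead=0.01, dispatch_overhead=0.005):
    """Admission control for a parallel real-time job modeled as a DAG.

    tasks: dict task_id -> {"exec": est. execution time, "deadline": absolute deadline}
    deps:  dict task_id -> set of predecessor task ids
    Returns True if every task is predicted to meet its deadline, else False.
    """
    finish = {}
    remaining = set(tasks)
    while remaining:
        progressed = False
        for t in list(remaining):
            preds = deps.get(t, set())
            if preds <= finish.keys():                 # all predecessors placed
                start = max([now] + [finish[p] for p in preds])
                start += sched_overhead + dispatch_overhead
                finish[t] = start + tasks[t]["exec"]
                if finish[t] > tasks[t]["deadline"]:
                    return False                       # reject: task would be late
                remaining.discard(t)
                progressed = True
        if not progressed:                             # cyclic deps: not a valid DAG
            return False
    return True
```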

16.
The paper presents a performance case study of parallel jobs executing under real multi-user workloads. The study is based on a measurement-based model capable of predicting the completion-time distribution of jobs executing under real workloads. The model is also capable of predicting the effects of system design changes on application performance. It is a finite-state, discrete-time Markov model with rewards and costs associated with each state. The Markov states are defined from real measurements and represent system/workload states in which the machine has operated. The paper places special emphasis on choosing the correct number of states to represent the measured workload. Specifically, the performance of computationally bound parallel applications executing in real workloads on an Alliant FX/80 is evaluated. The constructed model is used to evaluate scheduling policies, the performance effects of multiprogramming overhead, and the scalability of the Alliant FX/80 in real workloads. The model identifies a number of available scheduling policies that would improve the response time of parallel jobs. In addition, the model predicts that doubling the number of processors in the current configuration would improve the response time of a typical parallel application by only 25%. The model recommends a different processor configuration to more fully utilize the extra processors. The paper also presents empirical results that validate the constructed model.

17.
In enterprise grid computing environments, users have access to multiple resources that may be distributed geographically. Thus, resource allocation and scheduling is a fundamental issue in achieving high performance on enterprise grids. Most current job scheduling systems for enterprise grid computing provide batch queuing support and focus solely on the allocation of processors to jobs. However, since I/O is also a critical resource for many jobs, the allocation of processor and I/O resources must be coordinated to allow the system to operate most effectively. To this end, we present a hierarchical scheduling policy that pays special attention to the I/O and service demands of parallel jobs in homogeneous and heterogeneous systems with background workload. The performance of the proposed scheduling policy is studied under various system and workload parameters through simulation. We also compare the performance of the proposed policy with a static space–time sharing policy. The results show that the proposed policy performs substantially better than the static space–time sharing policy.

18.
To evaluate decentralized scheduling of parallel jobs on computational grids, a discrete-event simulation system, J3S, was designed. The discrete events are job submission, start, and termination, and a job's state can be waiting, running, or finished. Events change job states and trigger scheduling at the grid level or the local level. Jobs are generated by a grid workload model built on top of a workload model for parallel computers. Grid-level scheduling is accomplished through cooperation among grid schedulers, while local-level scheduling simulates an improved packing method that allows jobs to have different scheduling priorities. Grid resources may differ in performance, and job execution performance depends on job assignment. Its modular implementation lets J3S simulate a wide range of decentralized scheduling scenarios for parallel jobs on grids.

19.
This paper presents a three-stage algorithm for resource-aware scheduling of computational jobs in a large-scale heterogeneous data center. The algorithm aims to allocate job classes to machine configurations to attain an efficient mapping between job resource request profiles and machine resource capacity profiles. The first stage uses a queueing model that treats the system in an aggregated manner with pooled machines and jobs represented as a fluid flow. The latter two stages use combinatorial optimization techniques to solve a shorter-term, more accurate representation of the problem using the first-stage, long-term solution for heuristic guidance. In the second stage, jobs and machines are discretized. A linear programming model is used to obtain a solution to the discrete problem that maximizes the system capacity given a restriction on the job class and machine configuration pairings based on the solution of the first stage. The final stage is a scheduling policy that uses the solution from the second stage to guide the dispatching of arriving jobs to machines. We present experimental results of our algorithm on both Google workload trace data and generated data and show that it outperforms existing schedulers. These results illustrate the importance of considering heterogeneity of both job and machine configuration profiles in making effective scheduling decisions.
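A minimal sketch of the third-stage dispatching idea only, assuming the second stage has produced a target allocation of job classes to machine configurations; the "largest remaining gap" tie-breaking rule and the running counters are assumptions rather than the paper's policy:

```python
def dispatch(job_class, target_alloc, running):
    """Send an arriving job to the machine configuration that is furthest
    below its second-stage target for this job class.

    target_alloc[(job_class, config)]: desired number of concurrent jobs
    running[(job_class, config)]:      jobs of that class currently on that config
    Returns the chosen machine configuration, or None if all targets are met.
    """
    best, best_gap = None, 0
    for (cls, cfg), target in target_alloc.items():
        if cls != job_class:
            continue
        gap = target - running.get((cls, cfg), 0)
        if gap > best_gap:
            best, best_gap = cfg, gap
    return best
```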

20.
Allocating submeshes to jobs in mesh-connected multicomputers in an FCFS fashion leads to poor system performance because a large job at the head of the waiting queue can prevent the allocation of free submeshes to other smaller waiting jobs. However, serving jobs aggressively out-of-order can lead to excessive waiting delays for large jobs located at the head of the waiting queue. In this paper, we show that the ability of the job scheduling algorithm to bypass the head of the waiting queue should increase with the load, and we propose a scheduling scheme that can bypass the waiting queue head in a load-dependent adaptive fashion. Also, giving priority to large jobs because they are more difficult to accommodate is investigated. The performance of the proposed scheme has been compared to that of FCFS, aggressive out-of-order scheduling, and other previous job scheduling schemes. Extensive simulation results based on synthetic workloads and real workload traces indicate that our scheduling strategy is a good strategy when both average and maximum job waiting delays are considered. In particular, it is substantially superior to FCFS in terms of mean turnaround times, and to aggressive out-of-order scheduling in terms of maximum waiting delays.
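The exact load-to-bypass mapping is not given in the abstract; the sketch below simply scales the number of jobs allowed to bypass the queue head linearly with the measured load, as one possible reading of the load-dependent adaptive rule, with `max_bypass` as an assumed tuning parameter:

```python
def bypass_limit(load, max_bypass=32):
    """Map system load (e.g. utilization in [0, 1]) to how many waiting jobs
    behind the queue head may be considered out of order (assumed linear rule)."""
    load = min(max(load, 0.0), 1.0)
    return int(round(load * max_bypass))

def pick_jobs(wait_queue, load, can_allocate):
    """Consider the head job first, then up to bypass_limit(load) jobs behind it."""
    candidates = wait_queue[: 1 + bypass_limit(load)]
    return [job for job in candidates if can_allocate(job)]
```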
