Similar Documents
 20 similar documents found
1.
An Improved "Expand the Shoe to Fit the Foot" Algorithm Based on the Backfilling Scheduling Algorithm (cited 2 times)
Among the many parallel job scheduling algorithms, Backfilling is widely regarded as an effective way to raise CPU utilization. Built on top of FCFS, it backfills smaller jobs from the queue onto idle CPUs. However, when the number of idle CPUs still cannot satisfy the backfill requirements of the small jobs, some CPUs remain idle and further gains in utilization are hard to obtain. For shared-memory parallel computer systems, this paper proposes an improved "expand the shoe to fit the foot" algorithm based on Backfilling. Taking the CPU utilization of running jobs as its criterion, the algorithm dynamically adjusts the number of CPUs assigned to running jobs, enlarging the pool of CPUs available for backfilling so that jobs Backfilling alone cannot place are able to run. This remedies the shortcoming of Backfilling and greatly improves the CPU utilization of shared-memory parallel computer systems.
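The abstract builds on plain FCFS-plus-backfilling: smaller queued jobs are pulled forward onto idle CPUs when the head-of-queue job cannot start. The sketch below shows only that baseline step, not the paper's "expand the shoe to fit the foot" extension; the job fields and the idle-CPU counter are illustrative assumptions, and the reservation check that keeps backfilled jobs from delaying the head job is omitted.

```python
from collections import deque

def fcfs_with_backfilling(jobs, free_cpus):
    """Start queued jobs FCFS; when the head job does not fit,
    backfill smaller jobs that do fit into the idle CPUs.
    Each job is a dict with a 'cpus' requirement (illustrative)."""
    queue, started = deque(jobs), []
    # FCFS phase: start jobs from the head of the queue while they fit.
    while queue and queue[0]["cpus"] <= free_cpus:
        job = queue.popleft()
        free_cpus -= job["cpus"]
        started.append(job)
    # Backfilling phase: the head job no longer fits, so scan the rest of
    # the queue for smaller jobs that can use the remaining idle CPUs.
    for job in list(queue):
        if job["cpus"] <= free_cpus:
            queue.remove(job)
            free_cpus -= job["cpus"]
            started.append(job)
    return started, free_cpus

jobs = [{"name": "A", "cpus": 8}, {"name": "B", "cpus": 6}, {"name": "C", "cpus": 2}]
print(fcfs_with_backfilling(jobs, 10))  # A starts, B must wait, C is backfilled
```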

2.
This paper proposes an algorithm for scheduling Virtual Machines (VMs) with energy saving strategies in the physical servers of cloud data centers. An energy saving strategy, together with a solution for productive resource utilization in VM deployment, is modeled by a combination of the "Virtual Machine Scheduling using Bayes Theorem" (VMSBT) algorithm and the Virtual Machine Migration (VMMIG) algorithm. It is shown that the data center's overall energy consumption is minimized by this combination. Virtual machine migration between the active physical servers in the data center is carried out at periodic intervals whenever a physical server is identified as under-utilized. In VM scheduling, the optimal data centers are clustered using Bayes theorem and VMs are scheduled to the appropriate data center using a selection policy that identifies the cluster with the lower energy consumption. Clustering using Bayes' rule minimizes the number of server choices for the selection policy, which enables the proposed VMSBT algorithm to schedule virtual machines onto physical servers with minimal execution time. The proposed algorithm is compared with other energy-aware VM allocation algorithms, viz. an "Ant-Colony" optimization-based (ACO) allocation scheme and a "min-min" scheduling algorithm. The experimental simulation results show that the proposed combination of VMSBT and VMMIG outperforms the other two strategies and is highly effective in scheduling VMs with reduced energy consumption, by utilizing existing resources productively and by minimizing the number of active servers at any given point of time.
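As described, VMSBT clusters the candidate servers with Bayes' theorem and then places each VM in the cluster that looks cheaper in energy. The sketch below is a generic illustration of that posterior-based selection step, not the paper's exact formulation; the priors, likelihoods, and feature names are hypothetical.

```python
import math

def bayes_posterior(prior, likelihoods, evidence):
    """Posterior P(class | evidence) via Bayes' rule, assuming independent features."""
    log_post = {}
    for cls, p in prior.items():
        s = math.log(p)
        for feat in evidence:
            s += math.log(likelihoods[cls][feat])
        log_post[cls] = s
    z = sum(math.exp(v) for v in log_post.values())  # normalise back to probabilities
    return {cls: math.exp(v) / z for cls, v in log_post.items()}

# Hypothetical numbers: is a host cluster "low energy" or "high energy",
# given that we observed low CPU load and plenty of free memory?
prior = {"low_energy": 0.4, "high_energy": 0.6}
likelihoods = {
    "low_energy":  {"low_cpu": 0.8, "high_mem_free": 0.7},
    "high_energy": {"low_cpu": 0.3, "high_mem_free": 0.4},
}
post = bayes_posterior(prior, likelihoods, ["low_cpu", "high_mem_free"])
print(post)  # schedule the VM to the cluster that is most probably "low_energy"
```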

3.
杨翎  姜春茂 《计算机应用》2021,41(4):990-998
Virtual machine (VM) migration is widely used in cloud computing as an important means of reducing data center energy consumption. Combining the trisecting, acting, and outcome model of three-way decisions, a three-way-decision-based VM migration scheduling strategy (TWD-VMM) is proposed. First, a hierarchical threshold tree is built to search all candidate threshold values and, taking data center energy consumption as the optimization objective, the pair of thresholds with the lowest total energy consumption is obtained, trisecting the hosts into high-load, medium-load, and low-load regions. Second, different migration strategies are applied to hosts with different loads: for high-load hosts, the objectives are the multi-dimensional resource balance after a tentative migration out and the magnitude of the host load decrease; for low-load hosts, the multi-dimensional resource balance after tentative placement is the main consideration; a medium-load host accepts an incoming VM only if it still satisfies the medium-load condition after the migration. Experiments on the CloudSim simulator compare TWD-VMM with the threshold-based VM scheduling algorithm (TVMS), the energy-efficient VM scheduling algorithm (EEVS), and the energy-saving scheduling algorithm for cloud computing centers (REVMS) in terms of host load, the balance of multi-dimensional host resource utilization, and total data center energy consumption. The results show that TWD-VMM clearly improves host resource utilization and load balance, and reduces energy consumption by 27% on average.
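TWD-VMM's first step trisects hosts into high-, medium-, and low-load regions by a pair of thresholds chosen to minimise total energy. A minimal sketch of that trisection and the per-region action, with made-up thresholds and a CPU-only load measure:

```python
def trisect_hosts(hosts, low_thr, high_thr):
    """Split hosts into low/medium/high load regions by two thresholds
    on CPU utilisation (three-way decision style)."""
    regions = {"low": [], "medium": [], "high": []}
    for h in hosts:
        if h["util"] >= high_thr:
            regions["high"].append(h)    # candidate to migrate VMs out of
        elif h["util"] <= low_thr:
            regions["low"].append(h)     # candidate to empty and switch off
        else:
            regions["medium"].append(h)  # may accept incoming VMs
    return regions

hosts = [{"name": "h1", "util": 0.92}, {"name": "h2", "util": 0.15},
         {"name": "h3", "util": 0.55}]
print(trisect_hosts(hosts, low_thr=0.2, high_thr=0.8))
```

In the paper the two thresholds are not fixed constants as here; they are searched over a hierarchical threshold tree with total energy as the objective.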

4.
Server consolidation is one of the foremost concerns in the effective management of a cloud data center, as it has the potential to accomplish significant reductions in overall cost and energy consumption. Most of the existing works on server consolidation have focused only on reducing the number of active physical servers (PMs) using Virtual Machine (VM) live migration. But, along with reducing the number of active PMs, if a consolidation approach also reduces residual resource fragmentation, the residual resources can be used efficiently for new VM allocations or VM reallocations, and some future migrations can be avoided. To the best of our knowledge, none of the existing works have explicitly focused on reducing residual resource fragmentation together with reducing the number of active PMs. We propose RFAware Server Consolidation, a heuristics-based server consolidation approach that performs residual resource defragmentation while reducing the number of active PMs in cloud data centers.
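The key observation is that two consolidations leaving the same number of active PMs can differ in how usable the leftover capacity is. The abstract does not give the paper's metric, so the sketch below is only one plausible way to quantify residual fragmentation: the fraction of total free capacity that no single host can offer on its own.

```python
def residual_fragmentation(hosts):
    """Fraction of leftover CPU that cannot be offered by any single host.
    0.0 means all free capacity sits on one host; values near 1.0 mean the
    free capacity is scattered into many small, hard-to-use fragments."""
    free = [h["cpu_cap"] - h["cpu_used"] for h in hosts]
    total = sum(free)
    if total == 0:
        return 0.0
    return 1.0 - max(free) / total

hosts = [{"cpu_cap": 16, "cpu_used": 14}, {"cpu_cap": 16, "cpu_used": 14},
         {"cpu_cap": 16, "cpu_used": 12}]
print(residual_fragmentation(hosts))  # free cores scattered as 2+2+4 -> 0.5
```

A fragmentation-aware consolidation heuristic would prefer migration plans that lower this value as well as the active-PM count.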

5.
Cloud computing allows the deployment and execution of different types of applications, such as interactive databases or web-based services, which require distinct types of resources. These applications lease cloud resources for a considerably long period and usually occupy various resources to maintain a high quality of service (QoS). On the other hand, general big data batch processing workloads are less QoS-sensitive and require massively parallel cloud resources for short periods. Despite the elasticity of cloud computing, the fine-scale characteristics of cloud-based applications may cause temporarily low resource utilization in cloud computing systems, while process-intensive, highly utilized workloads suffer from performance issues. Therefore, utilization-efficient scheduling of heterogeneous workloads is a challenging issue for cloud owners. In this paper, addressing the impact of this heterogeneity on low utilization of cloud computing systems, a joint resource allocation scheme for cloud applications and processing jobs is presented to enhance cloud utilization. The main idea is to schedule processing jobs and cloud applications jointly in a preemptive way. However, utilization-efficient resource allocation requires exact modeling of the workloads. So, first, a novel methodology to model the processing jobs and other cloud applications is proposed: such jobs are modeled as a collection of parallel and sequential tasks in a Markovian process, which enables us to analyze and calculate the resources required to serve the tasks efficiently. The next step uses the proposed model to develop a preemptive scheduling algorithm for the processing jobs in order to improve resource utilization and its associated costs in the cloud computing system. Accordingly, a preemption-based resource allocation architecture is proposed to utilize the idle reserved resources for the processing jobs effectively and efficiently in cloud paradigms. Then, performance metrics such as service time for the processing jobs are investigated. The accuracy of the proposed analytical model and scheduling analysis is verified through simulations and experiments. The simulation and experimental results also shed light on the achievable QoS level for the preemptively allocated processing jobs.

6.
With the development of cloud computing and distributed services, east-west elephant flows inside data centers have surged; if scheduled improperly, these elephant flows tend to collide and cause link congestion. This paper proposes a dynamic-priority multipath scheduling algorithm (DPMS) based on software-defined networking (SDN). According to the characteristics of data center traffic, the algorithm builds separate scheduling models for elephant flows and mice flows, making full use of the redundant links between network nodes to improve resource utilization; it also uses group tables to optimize the communication pattern between the controller and the switches in the SDN architecture, reducing packet processing latency. Experimental results show that, compared with the ECMP and Hedera scheduling strategies, DPMS improves network throughput and link utilization, reduces the average flow completion time, and improves overall network performance.
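The core of DPMS is to treat elephant and mice flows differently and spread the elephants over redundant paths. The sketch below shows only that generic split plus a least-utilised-path choice; the byte threshold and the path model are assumptions, not the paper's parameters, and the real system would program the choice into switches via SDN group tables.

```python
ELEPHANT_BYTES = 10 * 1024 * 1024  # assumed threshold: flows above 10 MB are elephants

def schedule_flow(flow, paths):
    """Mice flows take the default (first) path; elephant flows are placed
    on the currently least-utilised of the redundant paths."""
    if flow["bytes"] < ELEPHANT_BYTES:
        path = paths[0]                                 # mice: low latency, no rerouting
    else:
        path = min(paths, key=lambda p: p["util"])      # elephant: balance link load
    path["util"] += flow["bytes"] / path["capacity"]    # crude utilisation update
    return path["name"]

paths = [{"name": "p0", "util": 0.6, "capacity": 1e9},
         {"name": "p1", "util": 0.2, "capacity": 1e9}]
print(schedule_flow({"bytes": 200 * 1024 * 1024}, paths))  # -> p1
```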

7.
Cloud computing is an emerging technology in a distributed environment with a collection of large-scale heterogeneous systems. One of the challenging issues in a cloud data center is to select the minimum number of virtual machine (VM) instances to execute the tasks of a workflow within a time limit. The objectives of such a strategy are to minimize the total execution time of a workflow and to improve resource utilization. However, existing algorithms do not guarantee high resource utilization even though they can achieve high execution efficiency, since higher resource utilization depends on the reusability of VM instances. In this work, we propose a new intelligent-water-drops-based workflow scheduling algorithm for Infrastructure-as-a-Service (IaaS) clouds. The objectives of the proposed algorithm are to achieve higher resource utilization and to minimize the makespan within the given deadline and budget constraints. The first contribution of the algorithm is to find multiple partial critical paths (PCPs) of a workflow, which helps in finding suitable VM instances. The second contribution is a scheduling strategy for PCP-VM assignment that assigns the VM instances. The proposed algorithm is evaluated through various simulation runs using synthetic datasets and various performance metrics. Through comparison, we show the superior performance of the proposed algorithm over the existing ones.
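A partial critical path is usually extracted by walking back from a workflow's exit task, each time following the predecessor with the latest finish time. The sketch below illustrates that single extraction step under simplifying assumptions (one PCP only, runtimes independent of the VM, and a DAG dictionary listed in a predecessors-first order); the paper's full algorithm repeats this to obtain multiple PCPs and then assigns VM instances to them.

```python
def partial_critical_path(dag, runtimes, exit_node):
    """dag maps each node to its list of predecessors, listed so that
    predecessors appear before successors. Returns one partial critical
    path ending at exit_node plus the earliest-finish times used to find it."""
    eft = {}
    for n in dag:
        eft[n] = runtimes[n] + max((eft[p] for p in dag[n]), default=0)
    path, node = [], exit_node
    while node is not None:
        path.append(node)
        preds = dag[node]
        node = max(preds, key=lambda p: eft[p]) if preds else None
    return list(reversed(path)), eft

dag = {"t1": [], "t2": [], "t3": ["t1", "t2"], "t4": ["t3"]}
runtimes = {"t1": 5, "t2": 2, "t3": 3, "t4": 1}
print(partial_critical_path(dag, runtimes, "t4"))  # (['t1', 't3', 't4'], ...)
```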

8.
Scheduling jobs on the IBM SP2 system and many other distributed-memory MPPs is usually done by giving each job a partition of the machine for its exclusive use. Allocating such partitions in the order in which the jobs arrive (FCFS scheduling) is fair and predictable, but suffers from severe fragmentation, leading to low utilization. This situation led to the development of the EASY scheduler, which uses aggressive backfilling: small jobs are moved ahead to fill in holes in the schedule, provided they do not delay the first job in the queue. We compare this approach with a more conservative approach in which small jobs move ahead only if they do not delay any job in the queue, and show that the relative performance of the two schemes depends on the workload. For workloads typical on SP2 systems, the aggressive approach is indeed better, but for other workloads the two algorithms are similar. In addition, we study the sensitivity of backfilling to the accuracy of the runtime estimates provided by the users and find a very surprising result: backfilling actually works better when users overestimate the runtime by a substantial factor.
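The difference between the two policies lies entirely in the admission rule for a backfill candidate: EASY (aggressive) protects only the reservation of the first queued job, while the conservative variant protects every queued job. The check below is a deliberately simplified schematic of that rule; the reservation representation and the delay test are stand-ins for the real schedule bookkeeping.

```python
def can_backfill(candidate, reservations, now, free_cpus, policy="easy"):
    """reservations: (start_time, cpus) pairs already promised to waiting jobs,
    head of queue first. A candidate may be backfilled only if it fits now and
    would not delay the protected jobs: only the head job under 'easy',
    every queued job under 'conservative'."""
    if candidate["cpus"] > free_cpus:
        return False
    finish = now + candidate["runtime_estimate"]
    protected = reservations[:1] if policy == "easy" else reservations
    for start, cpus in protected:
        # Simplified delay test: still holding CPUs a reservation needs at its start.
        if finish > start and candidate["cpus"] + cpus > free_cpus:
            return False
    return True

job = {"cpus": 2, "runtime_estimate": 4}
print(can_backfill(job, [(10, 8), (3, 3)], now=0, free_cpus=4, policy="easy"))          # True
print(can_backfill(job, [(10, 8), (3, 3)], now=0, free_cpus=4, policy="conservative"))  # False
```

The same structure also explains the runtime-estimate result: overestimates make `runtime_estimate` pessimistic, which leaves larger holes that later, shorter-than-declared jobs can fill.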

9.
Scheduling of tasks in cloud computing is an NP-hard optimization problem. Load balancing of non-preemptive independent tasks on virtual machines (VMs) is an important aspect of task scheduling in clouds. Whenever certain VMs are overloaded and the remaining VMs are underloaded with tasks for processing, the load has to be balanced to achieve optimal machine utilization. In this paper, we propose an algorithm named honey bee behavior inspired load balancing (HBB-LB), which aims to achieve a well-balanced load across virtual machines to maximize throughput. The proposed algorithm also balances the priorities of tasks on the machines in such a way that the waiting time of tasks in the queue is minimal. We have compared the proposed algorithm with existing load balancing and scheduling algorithms. The experimental results show that the algorithm is effective when compared with existing algorithms. Our approach shows a significant improvement in average execution time and a reduction in the waiting time of tasks in the queue.
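HBB-LB moves tasks from overloaded VMs to underloaded ones, the way foraging bees are redirected toward richer food sources, while taking task priorities into account. The toy sketch below captures only that rebalancing step; the load model, the priority rule, and the stopping condition are assumptions.

```python
def rebalance(vms):
    """Move low-priority tasks off overloaded VMs onto the least-loaded
    underloaded VM until the overloaded VMs drop to the average load."""
    avg = sum(vm["load"] for vm in vms) / len(vms)
    overloaded = [vm for vm in vms if vm["load"] > avg]
    underloaded = [vm for vm in vms if vm["load"] < avg]
    for vm in overloaded:
        while vm["tasks"] and vm["load"] > avg and underloaded:
            task = min(vm["tasks"], key=lambda t: t["priority"])   # move low priority first
            target = min(underloaded, key=lambda v: v["load"])     # "richest" destination
            vm["tasks"].remove(task)
            vm["load"] -= task["size"]
            target["tasks"].append(task)
            target["load"] += task["size"]
    return vms

vms = [{"load": 9, "tasks": [{"size": 4, "priority": 1}, {"size": 5, "priority": 3}]},
       {"load": 1, "tasks": [{"size": 1, "priority": 2}]}]
print([vm["load"] for vm in rebalance(vms)])  # loads move toward the average: [5, 5]
```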

10.
Cloud computing distributes parallel tasks among various resources. Applications with self-service support and on-demand service are growing rapidly. For these applications, cloud computing allocates resources dynamically via the internet according to user requirements, and proper resource allocation is vital for fulfilling them. In contrast, improper resource allocation results in load imbalance, which leads to severe service issues. Cloud resources are realized by internet-connected devices that use protocols for storage, communication, and computation. The extensive demands and the lack of an optimal resource allocation scheme make cloud computing more complex. This paper proposes Network Manager based Dynamic Scheduling (NMDS) to achieve an effective resource allocation scheme for users. The proposed system mainly focuses on dimensionality problems, where conventional methods fail. It introduces a three-threshold classification of tasks by size, STT, MTT, and LTT (small, medium, and large task thresholds), together with task merging that keeps energy consumption and response time to a minimum. The proposed NMDS is compared with the existing Energy-efficient Dynamic Scheduling scheme (EDS) and Decentralized Virtual Machine Migration (DVM). With Network Manager based Dynamic Scheduling, the proposed model achieves better resource allocation than the other existing models. The obtained results show that the proposed system allocates resources effectively and achieves about 94% energy efficiency compared with the other models. The evaluation metrics taken for comparison are energy consumption, mean response time, percentage of resource utilization, and migration.
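NMDS classifies each task against three size thresholds (STT, MTT, LTT) and merges small tasks before dispatch. A schematic version of those two steps follows; the threshold values, the size unit, and the batch limit are made-up assumptions.

```python
# Hypothetical size thresholds (e.g. in million instructions).
STT, MTT, LTT = 1_000, 10_000, 100_000

def classify(task_size):
    """Bucket a task as small / medium / large using the three thresholds."""
    if task_size <= STT:
        return "small"
    if task_size <= MTT:
        return "medium"
    return "large"  # anything above MTT; LTT would cap admissible task size in practice

def merge_small(tasks, batch=STT * 5):
    """Pack consecutive small tasks into batches so dispatch overhead,
    and hence energy and response time, stays low."""
    merged, current, size = [], [], 0
    for t in tasks:
        if classify(t) != "small":
            if current:
                merged.append(current); current, size = [], 0
            merged.append([t])
        else:
            if current and size + t > batch:
                merged.append(current); current, size = [], 0
            current.append(t); size += t
    if current:
        merged.append(current)
    return merged

print(classify(500), classify(50_000))          # small large
print(merge_small([200, 300, 400, 20_000, 100]))  # [[200, 300, 400], [20000], [100]]
```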

11.
Optimal task allocation in Large-Scale Computing Systems (LSCSs) that endeavors to balance the load across limited computing resources is considered an NP-hard problem. The MinMin algorithm is one of the most widely used heuristics for scheduling tasks on limited computing resources. MinMin minimizes makespan compared to other algorithms, such as Heterogeneous Earliest Finish Time (HEFT), duplication-based algorithms, and clustering algorithms. However, MinMin results in unbalanced utilization of resources, especially when the majority of tasks have low computational requirements. In this work we consider a computational model where each machine has a certain bounded capacity to execute a predefined number of tasks simultaneously. Based on the aforementioned model, a task scheduling heuristic, Extended High to Low Load (ExH2LL), is proposed that attempts to balance the workload across the available computing resources while improving resource utilization and reducing makespan. ExH2LL dynamically identifies the task-to-machine assignment considering the existing load on all machines. We compare ExH2LL with MinMin, H2LL, Improved MinMin Task Scheduling (IMMTS), Load Balanced MaxMin (LBM), and the M-Level Suffrage-Based Scheduling Algorithm (MSSA). Simulation results show that ExH2LL outperforms the compared heuristics with respect to makespan and resource utilization. Moreover, we formally model and verify the working of ExH2LL using High Level Petri Nets, the Satisfiability Modulo Theories Library, and the Z3 Solver.
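The model bounds how many tasks a machine may run at once, and the heuristic pushes work from the high-load end toward the low-load end. The sketch below is a simplified reading of that assignment rule, not the published ExH2LL: the slot-based capacity, largest-task-first order, and tie-breaking are assumptions.

```python
def bounded_capacity_assign(tasks, machines):
    """Assign each task to the machine with the lowest current load that still
    has a free slot; tasks are taken largest first so big work spreads out."""
    for task in sorted(tasks, reverse=True):
        candidates = [m for m in machines if len(m["tasks"]) < m["slots"]]
        if not candidates:
            raise RuntimeError("all machines are at their bounded capacity")
        target = min(candidates, key=lambda m: m["load"])
        target["tasks"].append(task)
        target["load"] += task
    return machines

machines = [{"name": "m1", "slots": 2, "load": 0.0, "tasks": []},
            {"name": "m2", "slots": 2, "load": 0.0, "tasks": []}]
for m in bounded_capacity_assign([7, 3, 5, 1], machines):
    print(m["name"], m["tasks"], m["load"])   # both machines end up with load 8.0
```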

12.
印乐  黄磊 《软件学报》2013,24(10):2289-2299
May-happen-in-parallel (MHP) analysis computes which statements of a parallel program may execute concurrently and is an important component of parallel program analysis. This paper presents a novel MHP analysis algorithm for Java programs. Compared with existing algorithms, the new algorithm abandons the assumption that a child thread can only be joined by its parent thread, and handles start synchronization and join synchronization separately in a decoupled manner. Its processing logic is simpler yet more complete. When computing control information, the new algorithm does not need to build a global control flow graph through inlining as existing algorithms do, which significantly improves its scalability. The new MHP algorithm is used to filter spurious data races in static data race detection. Experimental results on 14 Java benchmark programs show that the overhead of computing control information with the new MHP algorithm is far lower than that of existing algorithms.

13.
于鸽  冯山 《计算机应用》2016,36(6):1645-1649
To address the application of scheduling algorithms that guarantee the temporal consistency of real-time data objects in soft real-time database systems, a statistics-based optimized deferrable scheduling (SDS-OPT) algorithm is proposed. First, the characteristics and shortcomings of existing algorithms in terms of schedulability, quality of service (QoS), and workload are analyzed and compared, showing the need to optimize them. Then, the steepest descent method is used to raise the screening baseline for job execution times, thereby increasing the number of schedulable jobs for real-time update transactions and maximizing the QoS of temporal consistency for real-time data objects. Finally, the performance of the proposed algorithm is compared with that of existing algorithms in terms of workload and QoS. Simulation results show that, compared with the existing deferrable scheduling algorithm for fixed-priority transactions (DS-FP) and the statistical, non-deterministic deferrable scheduling algorithm (DS-PS), the proposed algorithm guarantees the temporal consistency of real-time data objects while reducing the workload and clearly improving QoS.

14.
Science is becoming more and more data-driven. The ability of a geographically distributed community of scientists to access and analyze large amounts of data has emerged as a significant requirement for furthering science. In data-intensive computing environments with vast numbers of nodes, resources are inevitably unreliable, which greatly affects task execution and scheduling. Novel algorithms are needed to schedule jobs onto trustworthy nodes, assure fast communication, reduce job execution time, lower the rate of failed executions, and improve the security of the execution environment for important data. In this paper, a trust-mechanism-based task scheduling model is presented. Drawing on the trust relationship models of human society, trust relationships are built among computing nodes, and the trustworthiness of nodes is evaluated using a Bayesian cognitive method. Integrating the trustworthiness of nodes into a Dynamic Level Scheduling (DLS) algorithm, the Trust-Dynamic Level Scheduling (Trust-DLS) algorithm is proposed. Moreover, a benchmark is structured to span a range of data-intensive computing characteristics for evaluating the proposed method. Theoretical analysis and simulations prove that the Trust-DLS algorithm can efficiently meet the requirements of data-intensive workloads with respect to trust, at a modest additional time cost, while assuring the secure execution of tasks in large-scale data-intensive computing environments.
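Trust-DLS folds a Bayesian estimate of node trustworthiness into the dynamic level of each (task, node) pair. A common Bayesian choice for such an estimate is a Beta posterior over the node's success/failure history; the sketch below uses that and a simple trust-weighted level. Both the weighting and the field names are illustrative assumptions, not the paper's exact formula.

```python
def beta_trust(successes, failures, prior_a=1.0, prior_b=1.0):
    """Posterior mean of a Beta(prior_a, prior_b) belief after observing
    successful and failed interactions with a node (Bayesian trust)."""
    return (prior_a + successes) / (prior_a + prior_b + successes + failures)

def trust_dynamic_level(task_level, est_finish, trust, weight=30.0):
    """Higher dynamic level = schedule sooner. Trustworthy, fast nodes win."""
    return task_level - est_finish + weight * trust

nodes = {"n1": beta_trust(45, 5), "n2": beta_trust(10, 10)}
task = {"level": 100.0, "est_finish": {"n1": 20.0, "n2": 12.0}}
best = max(nodes, key=lambda n: trust_dynamic_level(task["level"],
                                                    task["est_finish"][n],
                                                    nodes[n]))
print(nodes, "->", best)  # n1 wins: its higher trust outweighs n2's speed
```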

15.
In today's world, Cloud Computing (CC) enables users to access computing resources and services over the cloud without owning the infrastructure. Cloud computing is a concept in which a network of devices located in remote locations is integrated to perform operations like data collection, processing, data profiling, and data storage. In this context, resource allocation and task scheduling are important processes that must be managed based on the requirements of a user. In order to allocate resources effectively, a hybrid cloud is employed, since it is a capable solution for processing large-scale consumer applications in a pay-by-use manner. Hence, the model is designed as a profit-driven framework to reduce cost and makespan. With this motivation, the current research work develops a Cost-Effective Optimal Task Scheduling Model (CEOTS). A novel Target-based Cost Derivation (TCD) model is used in the proposed work for hybrid clouds. Moreover, the algorithm works on the basis of a multi-intentional task completion process with optimal resource allocation. The model was successfully simulated to validate its effectiveness based on factors such as processing time, makespan, and efficient utilization of virtual machines. The results show that the proposed model outperforms the existing works and can be relied upon for real-time applications in the future.

16.
Dynamic virtual machine (VM) consolidation is one of the emerging technologies considered for low-cost computing in cloud data centers. Quality-of-service (QoS) assurance is one of the challenging issues in the VM consolidation problem, since QoS is directly affected by the increase in resource utilization that consolidation causes. In this paper, we take advantage of Markov chain models to propose a novel approach to VM consolidation that can explicitly set a desired level of QoS constraint in a data center, ensuring the QoS goals while improving system utilization. For this purpose, an energy-efficient and QoS-aware best fit decreasing algorithm for VM placement is proposed, which considers the QoS objective when determining the location of a migrating VM. This algorithm employs an online transition matrix estimator to deal with the nonstationary nature of real workload data. We also propose new policies for detecting overloaded and underloaded hosts. The performance of the proposed algorithms is evaluated through simulations. The results show that the proposed VM consolidation algorithms outperform the benchmark algorithms in terms of energy consumption, service-level agreement violations, and other cost factors.
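The placement step is a modified best fit decreasing: VMs are sorted by demand and each goes to the host that stays most tightly packed while the predicted overload (QoS) risk remains below a target. The sketch below follows that shape but stubs out the paper's Markov-chain overload predictor with a crude utilisation rule; all numbers and field names are assumptions.

```python
def qos_aware_bfd(vms, hosts, qos_limit=0.1):
    """Best-fit-decreasing VM placement that skips hosts whose predicted
    overload probability would exceed the QoS limit."""
    def overload_prob(host, demand):
        # Stand-in for the Markov-chain estimator: a crude step once the
        # projected utilisation crosses 90%.
        projected = (host["used"] + demand) / host["capacity"]
        return 0.0 if projected < 0.9 else projected - 0.9 + 0.1

    placement = {}
    for vm in sorted(vms, key=lambda v: v["demand"], reverse=True):
        fits = [h for h in hosts
                if h["used"] + vm["demand"] <= h["capacity"]
                and overload_prob(h, vm["demand"]) <= qos_limit]
        if not fits:
            continue  # would trigger a new host or a migration in a real system
        best = min(fits, key=lambda h: h["capacity"] - h["used"] - vm["demand"])
        best["used"] += vm["demand"]
        placement[vm["name"]] = best["name"]
    return placement

hosts = [{"name": "hA", "capacity": 100, "used": 40},
         {"name": "hB", "capacity": 100, "used": 10}]
vms = [{"name": "vm1", "demand": 35}, {"name": "vm2", "demand": 50}]
print(qos_aware_bfd(vms, hosts))  # {'vm2': 'hA', 'vm1': 'hB'}
```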

17.
Association rule mining is an important branch of data mining, but with the rapid growth of data, traditional association rule mining algorithms can no longer meet the demands of big data and need breakthroughs on distributed, parallel computing platforms. Spark is a parallel computing model designed for big data processing and well suited to iterative computation; compared with MapReduce, it is more efficient, makes full use of memory, and is better suited to iterative and interactive processing. This paper classifies and surveys existing Spark-based parallel association rule mining algorithms, summarizes their respective strengths, weaknesses, and scopes of application, and provides a reference for further research.

18.
杨勇  蔡自兴  刘美琴 《计算机工程》2005,31(23):42-44,54
To handle the large amount of information processing and the many tasks in mobile robot navigation control, a distributed computing framework suitable for mobile robots is proposed, and on top of this framework a task scheduling method, GMBSA, is designed. Based on resource agents, the method first predicts task execution times and then uses a genetic algorithm combined with multi-queue backfilling for task scheduling, so as to minimize task execution time and ultimately achieve an optimized allocation of resources, satisfying the real-time requirements of robot navigation control. The performance of GMBSA is tested in a distributed computing environment built in the laboratory, and the performance of GMBSA, multi-queue backfilling, and FCFS is compared under light and heavy loads.

19.
Considering the heterogeneous, autonomous, and dynamic characteristics of grid resources, this paper studies task scheduling when local users have preemptive priority and proposes the Time-Balancing Based Scheduling (TBBS) algorithm. A scheduling optimization model is established to select the best resource combination for executing a task, with the minimum expected completion time as the objective. Tasks are decomposed and dispatched to resources using a time-balancing strategy, which reduces the waiting delay during sub-task synchronization and yields better parallel performance. A rescheduling strategy is adopted to accommodate the characteristics of resources in computational grids.

20.
Computing clusters (CCs), consisting of several connected machines, can provide a high-performance, multiuser, timesharing environment for executing parallel and sequential jobs. In order to achieve good performance in such an environment, it is necessary to assign processes to machines in a manner that ensures efficient allocation of resources among the jobs. The paper presents opportunity cost algorithms for the online assignment of jobs to machines in a CC. These algorithms are designed to improve the overall CPU utilization of the cluster and to reduce the I/O and interprocess communication (IPC) overhead. Our approach is based on known theoretical results on competitive algorithms. The main contribution of the paper is how to adapt this theory into working algorithms that can assign jobs to machines in a manner that guarantees near-optimal utilization of the CPU resource for jobs that perform I/O and IPC operations. The developed algorithms are easy to implement. We tested the algorithms by means of simulations and executions in a real system and show that they outperform existing methods for process allocation that are based on ad hoc heuristics.
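Opportunity cost schemes of this kind typically score each machine with an exponential function of the utilisation of each resource (CPU, I/O, IPC) and assign an arriving job to the machine whose score grows the least. The sketch below follows that marginal-cost rule as a generic illustration; the base constant and the resource fields are assumptions rather than the paper's parameters.

```python
A = 8.0  # base of the exponential cost; any constant > 1 captures the idea

def machine_cost(usage):
    """Opportunity cost of a machine: sum of an exponential in the
    utilisation of each resource (CPU and I/O in this sketch)."""
    return sum(A ** u for u in usage.values())

def assign(job, machines):
    """Place the job on the machine whose cost increases the least."""
    def marginal(m):
        after = {r: m["usage"][r] + job.get(r, 0.0) for r in m["usage"]}
        return machine_cost(after) - machine_cost(m["usage"])
    best = min(machines, key=marginal)
    for r in best["usage"]:
        best["usage"][r] += job.get(r, 0.0)
    return best["name"]

machines = [{"name": "m1", "usage": {"cpu": 0.7, "io": 0.2}},
            {"name": "m2", "usage": {"cpu": 0.3, "io": 0.6}}]
print(assign({"cpu": 0.2, "io": 0.1}, machines))  # -> m2, the cheaper marginal cost
```

The exponential penalises already-busy resources sharply, which is what gives this family of online algorithms its competitive-ratio guarantees.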
