Similar Literature
19 similar documents found (search time: 160 ms)
1.
To address the shortcomings of Hadoop's default scheduling algorithm and of the LATE scheduling algorithm in heterogeneous environments, an enhanced adaptive MapReduce scheduling algorithm is proposed on the basis of the SAMR scheduling algorithm. The algorithm records historical information for each node and uses K-means clustering to dynamically adjust the stage progress values so as to identify the straggler tasks that genuinely need backup execution. Experimental results show that the enhanced adaptive MapReduce scheduling algorithm is effective in reducing the estimation error of task execution time and in accurately identifying slow tasks.
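As a rough illustration of the straggler-detection idea in this abstract, the following Python sketch clusters tasks by their progress rate with a simple 1-D K-means and flags the slow cluster as backup candidates. The task records, the two-cluster choice and the demo data are assumptions for illustration, not the paper's SAMR-based formulas.

```python
# Minimal sketch: cluster task progress rates with 1-D K-means (k=2) and
# treat the slower cluster as straggler candidates for backup execution.
# Task records and the two-cluster setup are hypothetical illustrations.
import random

def kmeans_1d(values, k=2, iters=50):
    centers = random.sample(values, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centers[i]))
            groups[idx].append(v)
        centers = [sum(g) / len(g) if g else centers[i] for i, g in enumerate(groups)]
    return centers

def find_stragglers(tasks):
    """tasks: list of (task_id, progress, elapsed_seconds)."""
    rates = [progress / max(elapsed, 1e-6) for _, progress, elapsed in tasks]
    centers = sorted(kmeans_1d(rates))
    slow_center, fast_center = centers[0], centers[-1]
    # A task is a backup candidate if its rate sits closer to the slow centre.
    return [tid for (tid, _, _), r in zip(tasks, rates)
            if abs(r - slow_center) < abs(r - fast_center)]

if __name__ == "__main__":
    demo = [("t1", 0.9, 100), ("t2", 0.85, 110), ("t3", 0.3, 120), ("t4", 0.88, 95)]
    print(find_stragglers(demo))   # expected to flag t3
```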

2.
To overcome the shortcomings of the built-in scheduling and allocation method of current Hadoop clusters in heterogeneous resource environments, a node-capacity adaptive scheduling algorithm, NCAS (node capacity adaptive scheduling), is proposed. First, NCAS computes a scheduling factor from node performance and task characteristics; then the scheduling factor determines the amount of data and the number of task slots each node should receive; finally, more data and tasks are assigned to fast nodes and fewer to slow nodes. Experimental results show that, compared with traditional scheduling algorithms, NCAS greatly reduces the number of backup tasks launched, noticeably shortens job completion time, and improves task execution efficiency.
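A minimal sketch of the capacity-proportional allocation described above; the weighted CPU/memory/disk score standing in for the paper's scheduling factor is a hypothetical placeholder, not the published formula.

```python
# Minimal sketch of capacity-proportional allocation in the spirit of NCAS:
# a per-node "scheduling factor" (here a weighted score of CPU, memory and
# disk speed -- the weights are hypothetical, not the paper's formula)
# decides how many task slots and how much input data each node receives.

def scheduling_factor(node, w_cpu=0.5, w_mem=0.3, w_disk=0.2):
    return w_cpu * node["cpu"] + w_mem * node["mem"] + w_disk * node["disk"]

def allocate(nodes, total_slots, total_data_mb):
    factors = {n["name"]: scheduling_factor(n) for n in nodes}
    total = sum(factors.values())
    plan = {}
    for name, f in factors.items():
        share = f / total
        plan[name] = {"slots": max(1, round(share * total_slots)),
                      "data_mb": round(share * total_data_mb)}
    return plan

if __name__ == "__main__":
    cluster = [{"name": "fast-node", "cpu": 8, "mem": 16, "disk": 4},
               {"name": "slow-node", "cpu": 2, "mem": 4, "disk": 1}]
    print(allocate(cluster, total_slots=12, total_data_mb=4096))
```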

3.
Considering the autonomy, heterogeneity, and distribution of computing nodes in grid environments, a dynamic scheduling algorithm based on predicted task response time is proposed. Using historical data and information from recently visited computing nodes — task submission time, task completion time, and network communication delay — the method predicts each node's future task response time and submits tasks to lightly loaded or better-performing nodes. Experimental results show that the method not only effectively reduces unnecessary delay, but also outperforms random scheduling and other traditional algorithms in task response time, task throughput, and the time tasks wait in the scheduler before being dispatched.
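The response-time prediction described above might look roughly like the following sketch, which smooths each node's historical (submission, completion, network delay) records with an exponential moving average; the smoothing factor and the record format are assumptions.

```python
# Minimal sketch: predict each node's next task response time from its history
# (submission time, completion time, network delay) with an exponential moving
# average, and dispatch the task to the node with the lowest prediction.
# The smoothing factor and the history format are illustrative assumptions.

def predict_response(history, alpha=0.5):
    """history: list of (submit_t, complete_t, net_delay), oldest first."""
    estimate = None
    for submit_t, complete_t, net_delay in history:
        observed = (complete_t - submit_t) + net_delay
        estimate = observed if estimate is None else alpha * observed + (1 - alpha) * estimate
    return estimate if estimate is not None else float("inf")

def pick_node(node_histories):
    return min(node_histories, key=lambda n: predict_response(node_histories[n]))

if __name__ == "__main__":
    histories = {
        "node-a": [(0, 5, 0.2), (10, 14, 0.2), (20, 23, 0.1)],
        "node-b": [(0, 9, 0.5), (10, 21, 0.6), (25, 38, 0.6)],
    }
    print(pick_node(histories))   # node-a: shorter, improving response times
```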

4.
To address the fixed task progress ratios and passive straggler selection of speculative execution algorithms on heterogeneous clusters, an adaptive task scheduling algorithm based on the difference in computing capacity between fast and slow node sets is proposed. The algorithm quantifies the capacity difference to schedule the two node sets separately, and updates the fast and slow node sets in time through dynamic feedback of node and task rates, improving node-set resource utilization and task parallelism. Within the two node sets, a dynamically adjusted task progress ratio is used to identify straggler tasks, and fast nodes are actively selected to run backup copies of the stragglers in a substitute-execution manner, improving task execution efficiency. Compared with the Longest Approximate Time to End (LATE) algorithm, experiments show that the proposed algorithm shortens execution time by 5.21%, 20.51%, and 23.86% on short-job sets, mixed job sets, and mixed job sets with degraded node performance respectively, and launches noticeably fewer backup tasks. The proposed algorithm lets tasks adapt actively to node differences, reducing backup tasks while effectively improving overall job execution efficiency.
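A small sketch of the fast/slow node-set mechanism, assuming the cluster-mean processing rate as the split criterion (the paper's actual criterion and feedback rules are not reproduced here):

```python
# Minimal sketch of the fast/slow node-set idea: split nodes into two sets by
# their measured processing rate (threshold = cluster mean, an assumption),
# then run a straggler's backup copy on an idle node from the fast set.

def split_node_sets(node_rates):
    mean_rate = sum(node_rates.values()) / len(node_rates)
    fast = {n for n, r in node_rates.items() if r >= mean_rate}
    slow = set(node_rates) - fast
    return fast, slow

def backup_target(straggler_node, node_rates, busy_nodes):
    fast, _ = split_node_sets(node_rates)
    candidates = fast - busy_nodes - {straggler_node}
    # Prefer the fastest idle node in the fast set for the backup copy.
    return max(candidates, key=lambda n: node_rates[n]) if candidates else None

if __name__ == "__main__":
    rates = {"n1": 2.4, "n2": 2.1, "n3": 0.7, "n4": 0.9}
    fast, slow = split_node_sets(rates)
    print(sorted(fast), sorted(slow))                      # ['n1', 'n2'] ['n3', 'n4']
    print(backup_target("n3", rates, busy_nodes={"n2"}))   # n1
```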

5.
To address the weaknesses of the LATE algorithm in heterogeneous environments when selecting backup tasks and execution nodes, an improved IR-LATE scheduling algorithm is proposed. The algorithm launches backups for the slow tasks that have the longest remaining completion time and most need backing up, classifies nodes by load, and, combined with round-robin scheduling, assigns backup tasks to the node with the smallest load and a high success/load ratio. Experimental results show that, compared with LATE, the algorithm shortens job completion time by about 30%, improves execution efficiency, and thereby promotes load balancing in the system.
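The two IR-LATE selection rules summarized above — longest remaining time for the backup candidate, smallest load among nodes with a good success/load ratio for the backup node — could be sketched as follows; the record formats and the ratio cut-off are illustrative assumptions.

```python
# Minimal sketch of the selection rules described above: back up the slow task
# with the longest estimated remaining time, and place the backup on the node
# with the smallest load among those with a high success/load ratio.
# The record formats and the ratio cut-off are illustrative assumptions.

def pick_backup_task(slow_tasks):
    """slow_tasks: list of (task_id, estimated_remaining_seconds)."""
    return max(slow_tasks, key=lambda t: t[1])[0] if slow_tasks else None

def pick_backup_node(nodes, min_ratio=1.0):
    """nodes: list of (node_id, load, success_count)."""
    eligible = [n for n in nodes if n[2] / max(n[1], 1e-6) >= min_ratio]
    pool = eligible or nodes          # fall back to all nodes if none qualify
    return min(pool, key=lambda n: n[1])[0]

if __name__ == "__main__":
    slow = [("map_07", 120.0), ("map_11", 340.0), ("reduce_02", 90.0)]
    cluster = [("n1", 4.0, 10), ("n2", 1.5, 3), ("n3", 0.8, 0)]
    print(pick_backup_task(slow))    # map_11 (longest remaining time)
    print(pick_backup_node(cluster)) # n2: low load with a good success/load ratio
```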

6.
To address the local congestion effect on the I/O path of solid-state drive (SSD) storage systems under highly concurrent access, a dynamic congestion-control scheduler was designed and implemented. By setting a dynamic congestion threshold and handling I/O requests differentially, the scheduler significantly improves the performance of the SSD storage system and effectively alleviates local congestion on the I/O path. Experimental results show that, compared with the kernel NOOP scheduler, the proposed scheduler reduces average system response time by about 20% and the local congestion effect by about 90%.
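A toy sketch of the dynamic-threshold idea follows; the latency target and the adjustment steps are assumptions that the abstract does not specify.

```python
# Minimal sketch of dynamic congestion control for an I/O dispatcher: when the
# in-flight queue depth exceeds a threshold, new requests are deferred; the
# threshold itself is raised or lowered from observed completion latency.
# The latency target and adjustment steps are illustrative assumptions.

class CongestionDispatcher:
    def __init__(self, threshold=32, target_latency_ms=2.0):
        self.threshold = threshold
        self.target = target_latency_ms
        self.in_flight = 0

    def observe_completion(self, latency_ms):
        self.in_flight = max(0, self.in_flight - 1)
        # Shrink the window when the path looks congested, grow it otherwise.
        if latency_ms > self.target:
            self.threshold = max(4, self.threshold - 2)
        else:
            self.threshold = min(128, self.threshold + 1)

    def submit(self, request_id):
        if self.in_flight >= self.threshold:
            return ("deferred", request_id)      # hold back: path is congested
        self.in_flight += 1
        return ("dispatched", request_id)

if __name__ == "__main__":
    d = CongestionDispatcher(threshold=2)
    print(d.submit("r1"), d.submit("r2"), d.submit("r3"))  # r3 deferred
    d.observe_completion(latency_ms=1.0)
    print(d.submit("r3"))                                   # dispatched now
```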

7.
With the continued development and practical adoption of Hadoop-based big-data technology, the unsuitability of Hadoop YARN's resource scheduling policy for heterogeneous clusters has become increasingly apparent. On one hand, node resources cannot be allocated dynamically, so the computing resources of stronger nodes are wasted and system performance is not fully exploited; on the other hand, the existing static resource allocation policy ignores the differences between a job's execution phases and easily produces large amounts of resource fragmentation. To address these problems, a load-adaptive scheduling policy is proposed. It monitors performance information of cluster worker nodes and submitted jobs, uses the real-time monitoring data to model and quantify each node's overall computing capacity, and, combining node and job performance information, launches a similarity-evaluation-based dynamic resource scheduling scheme in the scheduler. The optimized system can effectively recognize differences in the execution capacity of cluster nodes and perform fine-grained dynamic resource scheduling according to the real-time demands of job tasks; it extends YARN's existing scheduling semantics and can also serve as a child-level resource scheduling scheme under an upper-level scheduler. The policy was implemented and tested on Hadoop 2.0; experimental results show that the adaptive resource scheduling policy significantly improves resource utilization, increases cluster concurrency by a factor of 2 to 3, and improves time performance by nearly 10%.
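The abstract does not spell out the similarity evaluation; one plausible reading, sketched below, scores nodes by the cosine similarity between a task's resource-demand vector and each node's free-capacity vector — both the vector layout and the metric are assumptions, not the paper's scheme.

```python
# Minimal sketch of similarity-based placement: score each feasible node by the
# cosine similarity between the task's resource-demand vector and the node's
# free-capacity vector, and place the task on the best-matching node.
# Treating <vcores, memory, disk> as the vector and cosine as the metric are
# illustrative assumptions about the paper's similarity evaluation.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def best_node(task_demand, node_free):
    """task_demand: (vcores, mem_gb, disk_gb); node_free: {name: same tuple}."""
    feasible = {n: cap for n, cap in node_free.items()
                if all(c >= d for c, d in zip(cap, task_demand))}
    if not feasible:
        return None
    return max(feasible, key=lambda n: cosine(task_demand, feasible[n]))

if __name__ == "__main__":
    free = {"n1": (8, 32, 100), "n2": (2, 4, 50), "n3": (4, 8, 20)}
    # n3's free capacity is exactly proportional to the demand, so it wins.
    print(best_node((2, 4, 10), free))
```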

8.
To overcome the inability of existing Hadoop to adapt to heterogeneous resource environments, an adaptive MapReduce scheduler, CloudMR, is proposed. Exploiting data locality, CloudMR performs local reduce (combine) of key-value pairs within the same rack, reducing the number of key-value pairs in intermediate results and thus the data transferred between racks. Based on resource capability and task characteristics, CloudMR dynamically determines each node's number of task slots and data allocation: nodes with high computing capability receive more tasks and data, while nodes with low computing capability are given correspondingly lighter task and data loads. Experiments show that, in heterogeneous environments, CloudMR reduces inter-node data transfer and backup task execution and shortens job completion time compared with existing Hadoop.
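A minimal sketch of the rack-local combine step, assuming a sum-style combine and a simple node-to-rack map (both illustrative choices):

```python
# Minimal sketch of the rack-local combine idea: merge (key, value) pairs
# produced by mappers on the same rack before anything crosses the rack
# boundary, so fewer intermediate pairs travel between racks. The rack map
# and the sum-style combine are illustrative assumptions.
from collections import defaultdict

def rack_local_combine(map_outputs, node_to_rack):
    """map_outputs: {node: [(key, value), ...]} -> {rack: {key: combined_value}}."""
    per_rack = defaultdict(lambda: defaultdict(int))
    for node, pairs in map_outputs.items():
        rack = node_to_rack[node]
        for key, value in pairs:
            per_rack[rack][key] += value      # local reduce within the rack
    return {rack: dict(kv) for rack, kv in per_rack.items()}

if __name__ == "__main__":
    outputs = {"n1": [("a", 1), ("b", 1)], "n2": [("a", 2)], "n3": [("a", 5)]}
    racks = {"n1": "rack1", "n2": "rack1", "n3": "rack2"}
    # rack1 now ships one ("a", 3) pair instead of two separate "a" pairs.
    print(rack_local_combine(outputs, racks))
```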

9.
The strengths and weaknesses of Hadoop's default speculative task scheduling algorithm and of the LATE scheduling algorithm for heterogeneous environments are studied and compared, and an improved speculative task scheduling algorithm for Hadoop clusters is proposed. The algorithm uses node history to dynamically adjust and update the phase proportions of Reduce tasks, applies local smoothing to the real-time task processing rate to improve the accuracy of estimating a task's remaining completion time, and finally uses the MCP model to validate the effectiveness of backup tasks. Experimental results show that the algorithm effectively raises the success rate of backup tasks and reduces job completion time.
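The remaining-time estimation could be sketched as below, with default 1/3 phase weights and an exponential smoothing factor standing in for the paper's history-derived values:

```python
# Minimal sketch of the remaining-time estimate used when choosing backup
# candidates: a Reduce task's overall progress is a weighted sum of its
# copy/sort/reduce phases, and its processing rate is smoothed exponentially
# before estimating the time left. The 1/3 phase weights (updated from node
# history in the paper), the smoothing factor and the sample data are
# illustrative assumptions.

def overall_progress(copy_p, sort_p, reduce_p, weights=(1/3, 1/3, 1/3)):
    return weights[0] * copy_p + weights[1] * sort_p + weights[2] * reduce_p

def smoothed_rate(samples, alpha=0.3):
    """samples: list of (timestamp, overall_progress), oldest first."""
    rate = None
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        instant = (p1 - p0) / max(t1 - t0, 1e-6)
        rate = instant if rate is None else alpha * instant + (1 - alpha) * rate
    return rate

def remaining_time(samples):
    rate = smoothed_rate(samples)
    return (1.0 - samples[-1][1]) / max(rate, 1e-6)

if __name__ == "__main__":
    # (timestamp_s, copy, sort, reduce) progress observations for one task
    raw = [(0, 0.1, 0, 0), (30, 0.6, 0, 0), (60, 1.0, 0.4, 0), (90, 1.0, 1.0, 0.3)]
    samples = [(t, overall_progress(c, s, r)) for t, c, s, r in raw]
    print(round(remaining_time(samples), 1), "seconds left (estimated)")
```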

10.
A workflow task scheduling method with time constraints is proposed for a computing-resource-sharing platform. The method uses a decentralized, tree-shaped application-layer overlay topology so that resource availability information can be collected efficiently and quickly. It combines a global scheduler with local schedulers: by defining a resource-collection procedure, the local scheduler on each node can provide its own resource availability information to the global scheduler, while workflow task deadline constraints and task recovery times are handled through a time-slot mechanism. Simulation results show that workflow tasks following the divide-and-conquer pattern and iterative patterns such as equation solving can be scheduled and run successfully on the platform, with fast response times and low communication overhead.

11.
An improved delay scheduling algorithm for homogeneous Hadoop cluster environments
Under the Hadoop framework, compute resources and data may reside at different physical locations, which gives rise to the data locality problem. Delay scheduling was introduced to address it: the algorithm uses the physical location of a task's pending input data to choose the job's compute node and schedules the task to that target node. However, several tasks of the same job may end up concentrated on one compute node, preventing the job from achieving the desired parallelism. Building on the original delay scheduling algorithm, a delay-capacity scheduling algorithm is proposed that allows some tasks to choose non-local nodes as their target compute nodes, improving job response time and increasing the degree of job parallelism. Comparative experiments show that the improved algorithm clearly outperforms the original delay scheduling algorithm in execution efficiency and parallelism.
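A small sketch of the delay-capacity rule, with a hypothetical per-node cap and wait limit:

```python
# Minimal sketch of the delay-capacity idea: classic delay scheduling waits for
# a free node that holds the task's data, but a per-node cap on tasks from the
# same job lets the task fall back to a non-local node, so one node does not
# accumulate most of a job's tasks. The cap and the wait limit are assumptions.

def choose_node(data_nodes, free_nodes, job_tasks_per_node,
                waited_rounds, max_wait=3, per_node_cap=2):
    # Prefer a data-local node that has not hit the per-job task cap.
    for node in data_nodes:
        if node in free_nodes and job_tasks_per_node.get(node, 0) < per_node_cap:
            return node, "local"
    # After waiting long enough, accept any free node below the cap (non-local).
    if waited_rounds >= max_wait:
        for node in sorted(free_nodes):
            if job_tasks_per_node.get(node, 0) < per_node_cap:
                return node, "non-local"
    return None, "wait"

if __name__ == "__main__":
    running = {"n1": 2}                 # n1 already runs 2 tasks of this job
    print(choose_node(["n1"], {"n1", "n2"}, running, waited_rounds=3))
    # ('n2', 'non-local'): n1 holds the data but is at the cap, so the task
    # launches elsewhere instead of piling onto n1.
```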

12.
MapReduce has become the mainstream model for processing massive data, and task scheduling, as one of its key components, has drawn wide attention from industry. Existing delay scheduling algorithms have two problems: they rest on the limiting assumption that all tasks are short, so performance degrades severely when nodes process tasks of different lengths, and their static wait-time threshold cannot adapt to the job requirements of different users. To address these problems, a delay scheduling algorithm based on task classification is proposed. The algorithm sets different wait-time thresholds for tasks of different lengths to suit the response requirements of different jobs, analyzes the relevant dynamic parameters, and adjusts the task wait-time thresholds according to the established task model. Simulation shows that the algorithm outperforms existing delay scheduling algorithms in response time and load balancing.
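The per-class wait thresholds might be organized as in the sketch below; the class boundaries and threshold values are illustrative, not taken from the paper.

```python
# Minimal sketch of task-classified delay scheduling: short tasks get a short
# locality wait, long tasks are allowed to wait longer for a data-local slot.
# The class boundaries and thresholds are illustrative assumptions.

def classify(task_input_mb):
    if task_input_mb < 64:
        return "short"
    if task_input_mb < 512:
        return "medium"
    return "long"

WAIT_THRESHOLD_S = {"short": 1.0, "medium": 5.0, "long": 15.0}

def should_launch_non_local(task_input_mb, waited_seconds):
    """True when the task has waited past the threshold for its class."""
    return waited_seconds >= WAIT_THRESHOLD_S[classify(task_input_mb)]

if __name__ == "__main__":
    print(should_launch_non_local(task_input_mb=32, waited_seconds=2))    # True
    print(should_launch_non_local(task_input_mb=1024, waited_seconds=2))  # False
```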

13.
This article presents an efficient hardware architecture for an EDF-based task scheduler, which is suitable for hard real-time systems due to the constant response time of the scheduler. The proposed scheduler contains a queue of ready tasks based on a new MIN/MAX queue architecture called Heap Queue, which is inspired by shift registers, systolic arrays, the heapsort algorithm, the Rocket Queue architecture and dual-port RAMs. The instructions of the proposed scheduler have a throughput of one instruction per two clock cycles regardless of the actual number of tasks managed by the scheduler and regardless of the scheduler's capacity. The developed task scheduler is optimized for low chip-area cost, which leads to lower energy consumption. The Heap Queue-based architecture has constant time complexity due to the two-clock-cycle response time of its instructions; the architecture is therefore highly deterministic. The scheduler supports CPUs that can execute 1, 2 or 4 tasks simultaneously, and contains efficient logic that handles the conflicts arising when all CPU cores use the scheduler at the same time. The proposed scheduler was verified through SystemVerilog UVM-like simulations that applied billions of randomly generated test instructions. ASIC (28 nm) and FPGA synthesis results are presented and compared. More than 86% of the chip area and 93% of the total power consumption can be saved if the Heap Queue architecture is used in hardware implementations of the EDF algorithm. Advantages and disadvantages of the proposed task scheduler are discussed through comparison with existing solutions.
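As a software analogue of the policy the hardware implements (not of the Heap Queue architecture itself), EDF dispatch with a binary min-heap keyed on absolute deadline looks like the sketch below; the task names and deadlines are made up.

```python
# Software analogue of the EDF policy the hardware scheduler implements: a
# binary min-heap keyed on absolute deadline always yields the ready task with
# the earliest deadline. This sketch illustrates only the scheduling policy,
# not the constant-time Heap Queue hardware architecture from the article.
import heapq

class EDFScheduler:
    def __init__(self):
        self._heap = []                     # (deadline, task_id)

    def add_task(self, task_id, deadline):
        heapq.heappush(self._heap, (deadline, task_id))

    def next_task(self):
        """Pop and return the ready task with the earliest deadline."""
        return heapq.heappop(self._heap)[1] if self._heap else None

if __name__ == "__main__":
    sched = EDFScheduler()
    sched.add_task("sensor_read", deadline=30)
    sched.add_task("ctrl_loop", deadline=10)
    sched.add_task("logging", deadline=100)
    print(sched.next_task(), sched.next_task(), sched.next_task())
    # ctrl_loop sensor_read logging
```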

14.
Scheduling large-scale applications in heterogeneous grid systems is a fundamental NP-complete problem that is critical for obtaining good performance and execution cost. Achieving high performance in a grid system requires effective task partitioning, resource management and load balancing. The heterogeneous and dynamic nature of a grid, as well as the diverse demands of applications running on it, makes grid scheduling a major task. Existing schedulers in wide-area heterogeneous systems require a large amount of information about the application and the grid environment to produce reasonable schedules. However, this information may not be available, may be too expensive to collect, or may increase the runtime overhead of the scheduler to the point where the scheduler is rendered ineffective. We believe that no single scheduler is appropriate for all grid systems and applications. This is because data-parallel applications, where further data partitioning is possible, can be improved through efficient resource management, smart resource selection and load balancing, whereas in functional (non-divisible-task) parallel applications such partitioning is either impossible, difficult, or expensive in terms of performance. In this paper, we propose a scheduler for data parallel applications (SDPA) which offers an efficient task partitioning and load balancing strategy for data parallel applications in a grid environment. The proposed SDPA offers two major features: maintaining job priority even when an insufficient number of free resources is available, and pre-task assignment to cut the idle time of nodes. SDPA selects nodes intelligently according to the nature of the task and the nodes' resource availability. Simulation results reveal that SDPA achieves performance improvements over the strategies reported in the reviewed literature in terms of execution time, throughput and waiting time.

15.
To address the difficulty of hailing a taxi caused by traffic congestion, a cloud-based intelligent taxi-calling system for mobile phones is designed and implemented. The system consists of a cloud server and an Android client. The server parallelizes the K-means clustering algorithm with the Map-Reduce parallel programming model in a cloud computing environment to improve the quality and efficiency of pushed information; the client uses the LocationClient, MapView and MKOfflineMap interfaces to provide positioning, map-layer display and update, and Baidu offline map services respectively, delivering timely and accurate information to users through the Android smartphone platform. Between client and server, information serialized with the Protocol Buffer protocol is pushed via an RPC service. Experimental results show that, compared with the Didi taxi-hailing app, the system improves search and recommendation efficiency by about 20%, reduces the data traffic of offline map display and positioning by more than 90% compared with the online approach, and delivers good fast-response performance.

16.
Computational grids have become an appealing research area as they solve compute-intensive problems within the scientific community and in industry. A grid's computational power is aggregated from a huge set of distributed heterogeneous workers; hence, it is becoming a mainstream technology for large-scale distributed resource sharing and system integration. Unfortunately, current grid schedulers suffer from the haste problem, which is the scheduler's inability to successfully allocate all input tasks. Accordingly, some tasks fail to complete execution as they are allocated to unsuitable workers. Others may not start execution because suitable workers were previously allocated to other peers. This paper is the first to introduce the scheduling haste problem. It also presents a reliable grid scheduler. The proposed scheduler selects the most suitable worker to execute an input grid task using a fuzzy inference system. Hence, it minimizes the turnaround time for a set of grid tasks. Moreover, our scheduler is a system-oriented one as it avoids the scheduling haste problem. Experimental results have shown that the proposed scheduler outperforms traditional grid schedulers as it achieves better scheduling efficiency.
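The abstract does not give the details of the fuzzy inference system; a toy version with triangular membership functions and two rules is sketched below purely to illustrate the approach, and is not the paper's actual system.

```python
# Minimal sketch of fuzzy worker selection: fuzzify each worker's speed and
# load with triangular membership functions, apply two simple rules
# ("fast AND lightly loaded -> suitable", "slow OR heavily loaded ->
# unsuitable"), and rank workers by the resulting suitability score. The
# membership functions and rules are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function peaking at b over [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def suitability(speed_ghz, load_fraction):
    fast = tri(speed_ghz, 1.0, 3.5, 6.0)
    slow = tri(speed_ghz, -1.0, 0.5, 2.0)
    light = tri(load_fraction, -0.5, 0.0, 0.8)
    heavy = tri(load_fraction, 0.4, 1.0, 1.6)
    suitable = min(fast, light)          # rule 1: fast AND lightly loaded
    unsuitable = max(slow, heavy)        # rule 2: slow OR heavily loaded
    return suitable - unsuitable         # crude defuzzification into one score

def pick_worker(workers):
    """workers: {name: (speed_ghz, load_fraction)}."""
    return max(workers, key=lambda w: suitability(*workers[w]))

if __name__ == "__main__":
    grid = {"w1": (3.2, 0.2), "w2": (1.2, 0.1), "w3": (3.6, 0.9)}
    print(pick_worker(grid))   # w1: fast and lightly loaded
```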

17.
For reconfigurable resource management in multitasking operating systems, a management model and an online scheduling algorithm are proposed and implemented to assign tasks to a block-partitioned reconfigurable device. On one hand, the reconfigurable device is controlled by a host CPU that runs the online scheduler and placer; on the other hand, the device consists of fixed-size blocks of identical vertical dimension but possibly different widths, so as to achieve a better match between resources and tasks; the online scheduler and placer run two functions, fSPLIT and fSELECT, to configure and schedule tasks on the reconfigurable device. Simulation results show that the proposed resource management model and scheduling algorithm not only minimize the average response time of the task set and schedule it effectively, but also achieve higher resource utilization than other scheduling algorithms.

18.
李震  杜中军 《计算机工程》2012,38(11):27-29,37
When distributing input files, the Map-Reduce model does not take into account the computing performance of the many heterogeneous nodes in a cluster, which increases the amount of data transferred over the network while map tasks run. To address this problem, an improved Map-Reduce model for cloud computing environments is proposed. Considering that the cluster's many nodes differ in computing performance, the problem is modeled with an objective function that minimizes the maximum computation time, and a genetic algorithm is used to solve the model and obtain the allocation scheme. Simulation results demonstrate the effectiveness of the model.
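A compact sketch of the min-max formulation solved with a small genetic algorithm follows; the split sizes, node speeds and GA parameters are made up for illustration and do not reproduce the paper's model.

```python
# Minimal sketch of the genetic-algorithm formulation above: an individual is
# an assignment of input splits to nodes, fitness is the maximum per-node
# computation time (to be minimized), and one-point crossover plus random
# mutation evolve the population. All numbers are illustrative assumptions.
import random

SPLIT_MB = [128, 128, 64, 256, 128, 64, 256, 128]      # input splits
NODE_SPEED = [2.0, 1.0, 0.5]                           # MB per second per node

def makespan(assignment):
    """assignment[i] = node index for split i; return max node finish time."""
    load = [0.0] * len(NODE_SPEED)
    for split, node in zip(SPLIT_MB, assignment):
        load[node] += split / NODE_SPEED[node]
    return max(load)

def evolve(pop_size=30, generations=200, mutation_rate=0.1):
    pop = [[random.randrange(len(NODE_SPEED)) for _ in SPLIT_MB]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)                          # lower makespan = fitter
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(SPLIT_MB))
            child = a[:cut] + b[cut:]                   # one-point crossover
            if random.random() < mutation_rate:         # random reassignment
                child[random.randrange(len(child))] = random.randrange(len(NODE_SPEED))
            children.append(child)
        pop = parents + children
    return min(pop, key=makespan)

if __name__ == "__main__":
    best = evolve()
    print(best, "max node time:", round(makespan(best), 1), "s")
```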


