Similar Documents
18 similar documents found (search time: 140 ms)
1.
安鑫, 康安, 夏近伟, 李建华, 陈田, 任福继. 《计算机应用》 2020, 40(10): 3081-3087
Heterogeneous multicore processors have become the mainstream solution for modern embedded systems, and good online mapping or scheduling methods are crucial for fully exploiting their advantages of high performance and low power consumption. For the dynamic mapping and scheduling of applications on heterogeneous multicore systems, a solution is proposed that maximizes system performance by combining machine-learning techniques for fast, accurate program-performance evaluation with detection of phase changes in program behavior, the latter used to decide when remapping is worthwhile. On one hand, the scheme builds a more accurate prediction model by carefully selecting static and dynamic features of the cores and the running programs, capturing the differences in compute capability and workload behavior that heterogeneity introduces; on the other hand, phase detection keeps the number of online mapping computations as small as possible, yielding a more efficient schedule. The effectiveness of the proposed scheme is validated on the SPLASH-2 benchmarks. Experimental results show that, compared with Linux's default Completely Fair Scheduler (CFS), the proposed method improves system compute performance by 52% and CPU utilization by 9.4%, indicating that it can effectively improve dynamic application mapping and scheduling on heterogeneous multicore systems.
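A minimal sketch of the two ingredients described above, assuming a linear-regression predictor and a windowed-mean phase detector; the feature set, thresholds, synthetic training data, and the `remap_if_needed` helper are illustrative, not the paper's actual design:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

class PhaseDetector:
    """Flag a program phase change when recent IPC drifts from the prior window."""
    def __init__(self, window=50, threshold=0.2):
        self.window, self.threshold = window, threshold
        self.samples = []

    def update(self, ipc):
        self.samples.append(ipc)
        if len(self.samples) < 2 * self.window:
            return False
        old = np.mean(self.samples[-2 * self.window:-self.window])
        new = np.mean(self.samples[-self.window:])
        return abs(new - old) / max(old, 1e-9) > self.threshold

# One performance predictor per core type, trained offline on profiled runs
# (the training data below is synthetic, for illustration only).
rng = np.random.default_rng(0)
X = rng.random((200, 4))                      # static + dynamic program features
predictors = {}
for core_type, scale in (("big", 1.6), ("little", 0.9)):
    y = scale * (X @ np.array([0.4, 0.3, 0.2, 0.1]))
    predictors[core_type] = LinearRegression().fit(X, y)

def remap_if_needed(detector, ipc, features):
    """Only recompute the mapping when a phase change is detected."""
    if not detector.update(ipc):
        return None                           # keep the current mapping
    return max(predictors, key=lambda ct: predictors[ct].predict([features])[0])
```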

2.
A machine-learning-based online mapping method for heterogeneous multicore processor systems. Cited by: 1 (self-citations: 0, citations by others: 1)
安鑫, 张影, 康安, 陈田, 李建华. 《计算机应用》 2019, 39(6): 1753-1759
Heterogeneous multicore processor (HMP) platforms have become the mainstream solution for modern embedded systems, and online mapping or scheduling plays a crucial role in fully exploiting their advantages of high performance and low power consumption. For the dynamic task-mapping problem on HMPs, an online mapping and scheduling solution based on a machine-learning prediction model is proposed. On one hand, a machine-learning model is built that can quickly and efficiently predict and evaluate the performance of different mapping schemes, providing support for online scheduling; on the other hand, this model is integrated into a genetic algorithm to efficiently find a (near-)optimal resource allocation. The effectiveness of the method is validated on an M-JPEG decoder. Experimental results show that the method reduces average execution time by about 28% and 19% compared with the common round-robin and sampling-based scheduling methods, respectively.
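A hedged sketch of how an ML surrogate can serve as the fitness function inside a genetic algorithm; `predict_exec_time`, the core list, and all GA parameters are illustrative stand-ins rather than the paper's actual model:

```python
import random

CORES = ["big0", "big1", "little0", "little1"]
N_TASKS = 6

def predict_exec_time(mapping):
    # Placeholder surrogate: in the paper this is the trained ML model.
    speed = {"big0": 2.0, "big1": 2.0, "little0": 1.0, "little1": 1.0}
    loads = {c: mapping.count(c) for c in CORES}
    return max(loads[c] / speed[c] for c in CORES)   # makespan-style estimate

def evolve(pop=30, gens=50, mut=0.1):
    population = [[random.choice(CORES) for _ in range(N_TASKS)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=predict_exec_time)       # lower time = fitter
        parents = population[: pop // 2]
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_TASKS)
            child = a[:cut] + b[cut:]                 # one-point crossover
            if random.random() < mut:
                child[random.randrange(N_TASKS)] = random.choice(CORES)
            children.append(child)
        population = parents + children
    return min(population, key=predict_exec_time)

best_mapping = evolve()
```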

3.
Heterogeneous multicore platforms provide design flexibility by integrating different types of processor cores, letting applications dynamically choose the core type that suits their needs and thus run efficiently. As semiconductor technology advances, the number of cores integrated on a single chip grows, giving modern multicore processors higher power density; this raises chip temperature, which in turn degrades system performance. To fully exploit the performance advantages of heterogeneous multicore systems, a dynamic mapping method is proposed that maximizes system performance while staying within a thermally safe power budget. The method considers two kinds of heterogeneity when deciding a mapping. The first is core type: different core types have different characteristics and therefore suit different applications. The second is thermal susceptibility: cores at different positions on the chip heat up differently, with centrally located cores receiving more heat transferred from other cores and therefore running hotter. Accordingly, a neural-network performance predictor matches threads to core types, and the Thermal Safe Power (TSP) model then maps the matched threads to concrete positions on the chip. Experimental results show that, compared with common round-robin scheduling (RRS), the proposed method raises the average number of instructions executed per cycle (IPC) by about 53% while guaranteeing thermal-safety constraints.
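A hedged sketch of the two-stage idea, assuming a 4x4 mesh, a stand-in for the neural-network predictor, and a simple "edge cores run cooler" rule in place of the actual TSP model; all names and numbers are illustrative:

```python
GRID = [(x, y) for x in range(4) for y in range(4)]
CORE_TYPE = {(x, y): "big" if x in (1, 2) else "little" for (x, y) in GRID}

def predicted_ipc(thread, core_type):
    # Stand-in for the trained neural-network predictor: big cores pay off
    # only for threads with enough instruction-level parallelism.
    return thread["ilp"] * 2.0 - 0.8 if core_type == "big" else thread["ilp"]

def thermal_rank(pos):
    # Central cores receive more heat from neighbours; edge cores rank lower.
    x, y = pos
    return min(x, 3 - x) + min(y, 3 - y)     # 0 at the edge, larger inside

def map_threads(threads):
    placement, free = {}, set(GRID)
    for t in sorted(threads, key=lambda t: -t["ilp"]):      # hungriest first
        best = max(("big", "little"), key=lambda ct: predicted_ipc(t, ct))
        candidates = [p for p in free if CORE_TYPE[p] == best] or list(free)
        pos = min(candidates, key=thermal_rank)             # coolest position
        placement[t["name"]] = pos
        free.remove(pos)
    return placement

print(map_threads([{"name": f"t{i}", "ilp": 0.5 + 0.3 * i} for i in range(5)]))
```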

4.
To improve the compute performance of heterogeneous CPU/GPU clusters, a two-level dynamic scheduling algorithm is proposed in which CPUs and GPUs compute cooperatively across the cluster. Data are dispatched dynamically according to each node's measured compute capability and task requests, tasks are dynamically scheduled between the CPU and the GPU within each node, and a dual-queue mechanism for data caching and data processing improves the cluster's transfer and processing efficiency. The algorithm lets more capable nodes take on more work, avoiding the long-tail effect caused by single-node performance bottlenecks. Experimental results show an 11-fold performance improvement over conventional MPI/GPU parallel computing.
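A minimal sketch of the dual-queue idea using Python threads; the device names and "speeds" are made-up stand-ins for the measured per-node capability, not the paper's implementation:

```python
import queue
import threading
import time

cache_q = queue.Queue(maxsize=8)       # data-cache queue: staged input blocks
results = []

def device_worker(name, speed):
    while True:
        block = cache_q.get()
        if block is None:              # poison pill: shut the worker down
            break
        time.sleep(0.01 / speed)       # simulate work; faster device pulls more
        results.append((name, block))

workers = [threading.Thread(target=device_worker, args=("gpu", 8.0)),
           threading.Thread(target=device_worker, args=("cpu", 2.0))]
for w in workers:
    w.start()
for block in range(40):               # producer side: fill the cache queue
    cache_q.put(block)
for _ in workers:
    cache_q.put(None)
for w in workers:
    w.join()
print(len(results), "blocks processed")   # the faster "device" did most of them
```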

5.
CPU-GPU heterogeneous multicore systems are widely used because they markedly accelerate compute-intensive applications, but they are prone to load imbalance. To address this problem, a dynamic task-scheduling algorithm for CPU-GPU heterogeneous multicore systems is proposed. The algorithm makes full use of the CPU's thread resources and the GPU's compute resources, accurately measures the compute capability of the CPU and the GPU, and dynamically adjusts the size of the data blocks assigned to each, reducing the total execution time of the workload and improving system speedup. Experimental results show that the algorithm improves system speedup by 34% to 103%.
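A hedged sketch of the capability-measurement and block-splitting step: probe both devices on small chunks, then split each block in proportion to measured throughput. The probe size and the proportional split rule are assumptions:

```python
import time

def throughput(device_fn, chunk):
    t0 = time.perf_counter()
    device_fn(chunk)
    return len(chunk) / max(time.perf_counter() - t0, 1e-9)  # items per second

def balanced_split(data, cpu_fn, gpu_fn, probe=1024):
    cpu_rate = throughput(cpu_fn, data[:probe])
    gpu_rate = throughput(gpu_fn, data[probe:2 * probe])
    gpu_share = gpu_rate / (cpu_rate + gpu_rate)
    cut = int(len(data) * gpu_share)
    return data[:cut], data[cut:]              # (gpu_block, cpu_block)

# Example with dummy device functions standing in for real kernels:
gpu_block, cpu_block = balanced_split(
    list(range(100_000)),
    cpu_fn=lambda c: [x * x for x in c],       # slower "CPU" work
    gpu_fn=lambda c: sum(c),                   # faster pretend "GPU" work
)
```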

6.
Using compute resources efficiently in heterogeneous environments is key to task efficiency and cluster utilization. Kubernetes is the leading solution for container orchestration, but in heterogeneous scheduling scenarios its scheduler lacks fine-grained GPU information and cannot satisfy user-defined requirements, and in mixed CPU/GPU-node deployments it cannot sense heterogeneous resources, which leads to resource contention. Taking into account how heterogeneous resources are distributed across nodes and their hardware state, a fine-grained Kubernetes scheduling strategy for CPU/GPU heterogeneous resources is proposed. A device-plugin mechanism collects detailed GPU information on every node and submits GPU resource metrics to the scheduling algorithm. On top of the existing CPU and memory filters, a filter over user-defined GPU information is added to select the nodes that meet fine-grained user requirements. For mixed CPU/GPU-node deployments, the scheduler's scoring algorithm is improved to sense the application type dynamically, applying a load-balancing algorithm to CPU applications and a minimum best-fit algorithm to GPU applications; this ensures that the heterogeneous scheduling strategy places each type of application correctly, and lets fragmented resources on GPU nodes be used when CPU resources run short. Experiments on fine-grained GPU scheduling and on mixed CPU/GPU-node deployments show that the strategy schedules GPUs effectively and avoids resource contention.
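A hedged sketch of the filter-and-score pipeline described above; the field names are illustrative, and a real implementation would live in a Kubernetes scheduler plugin or extender fed by the device plugin's GPU metrics:

```python
def filter_nodes(nodes, pod):
    """Keep nodes that pass CPU/memory filters plus the custom GPU filter."""
    ok = []
    for n in nodes:
        if n["cpu_free"] < pod["cpu"] or n["mem_free"] < pod["mem"]:
            continue
        if pod.get("gpu_mem"):                      # user-defined GPU demand
            if not any(g["mem_free"] >= pod["gpu_mem"] for g in n["gpus"]):
                continue
        ok.append(n)
    return ok

def score(node, pod):
    """GPU pods: best fit (pack tightly); CPU pods: load balance."""
    if pod.get("gpu_mem"):
        leftover = min(g["mem_free"] - pod["gpu_mem"]
                       for g in node["gpus"] if g["mem_free"] >= pod["gpu_mem"])
        return -leftover            # smaller leftover scores higher (best fit)
    return node["cpu_free"] / node["cpu_total"]   # prefer least-loaded node
```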

7.
CPU/GPU heterogeneous systems have great development potential, and in-depth study of parallel optimization on CPU/GPU platforms can maximize the overall compute capability of a system. Optimizing the CPU/GPU task partition balances the load between the CPU and the GPU, improving compute-resource utilization and shortening task execution time; optimizing the GPU thread partition lets the GPU achieve higher speed. Together, these raise overall system performance.

8.
The dynamically reconfigurable cores of dynamic heterogeneous multicore processors bring new opportunities and challenges for operating-system scheduling algorithms. Exploiting this reconfigurability can better match the runtime needs of different tasks and opens a large performance-optimization space, but it also introduces new costs and a more complicated notion of fairness. To address fair scheduling on dynamic heterogeneous multicore architectures, a scheduling model based on a centralized run queue is proposed, reducing the maintenance overhead the scheduler incurs when cores change dynamically. Fairness is then reconsidered for dynamic heterogeneous architectures, and a new HFS scheduling algorithm is proposed on the basis of the existing CFS algorithm. HFS not only exploits the performance advantages of dynamic heterogeneous multicore processors simply and effectively but also provides fair scheduling on them. Simulations of SCMP, ACMP, and DHCMP platforms show that HFS brings out the performance characteristics of the DHCMP architecture well, improving user-level performance (ANTT) by 10.55% and system throughput (WSU) by 14.24% over SCMP and ACMP architectures running current mainstream schedulers.
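A hedged sketch of fairness accounting on a centralized run queue, in the spirit of CFS vruntime: time run on a faster (or dynamically strengthened) core costs more virtual time, so tasks stuck on weak cores stay competitive. The scaling rule is an assumption, not the paper's exact HFS definition:

```python
import heapq

class CentralRunQueue:
    """One queue for all cores, so core reconfiguration needs no rebalancing."""
    def __init__(self):
        self.heap = []                        # (scaled vruntime, task_id)

    def enqueue(self, task_id, vruntime=0.0):
        heapq.heappush(self.heap, (vruntime, task_id))

    def pick_next(self):
        return heapq.heappop(self.heap)       # least scaled vruntime runs next

    def account(self, task_id, vruntime, ran_ns, core_capability):
        # A slice on a strong core "costs" more virtual time -- one possible
        # heterogeneous fairness rule.
        self.enqueue(task_id, vruntime + ran_ns * core_capability)

rq = CentralRunQueue()
for t in ("a", "b", "c"):
    rq.enqueue(t)
vr, task = rq.pick_next()
rq.account(task, vr, ran_ns=4_000_000, core_capability=2.0)  # ran on a big core
```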

9.
Dataflow programming languages simplify programming in their domains by cleanly separating task computation from data communication, exposing parallelism at both the task and the data level. Targeting the abundant data, task, and pipeline parallelism in hybrid GPU/CPU architectures, a task-partitioning method and a multi-granularity scheduling strategy for dataflow programs are proposed and implemented, comprising task classification, horizontal splitting of GPU-side tasks, and balancing of discrete CPU-side tasks; a software-pipeline schedule is constructed, and OpenCL target code is generated after compiler optimization. Task classification assigns each task to the appropriate platform according to its computational characteristics and inter-task communication volume. Horizontal splitting exploits the parallelism of GPU-side tasks to split them evenly across the GPUs, avoiding the high inter-GPU communication overhead that would otherwise hurt overall performance. CPU-side balancing selects suitable CPU cores and distributes the CPU-side tasks evenly across them, preserving load balance and raising per-core utilization. Experiments on a hybrid platform of multiple NVIDIA Tesla C2050 GPUs and multicore CPUs, using typical multimedia algorithms as test programs, confirm the effectiveness of the partitioning method and the scheduling strategy.
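A hedged sketch of two of the three steps, classification by compute-to-communication ratio and horizontal splitting across GPUs; the threshold and record fields are illustrative:

```python
def classify(tasks, gpu_threshold=4.0):
    """Send compute-heavy, communication-light tasks to the GPU side."""
    plan = {"gpu": [], "cpu": []}
    for t in tasks:
        ratio = t["flops"] / max(t["bytes_comm"], 1)
        plan["gpu" if ratio >= gpu_threshold else "cpu"].append(t["name"])
    return plan

def split_horizontally(gpu_task_names, n_gpus):
    # Round-robin split of data-parallel GPU tasks across devices, keeping
    # per-GPU work even so no single inter-GPU link becomes a hotspot.
    return {g: gpu_task_names[g::n_gpus] for g in range(n_gpus)}

plan = classify([{"name": "fft",  "flops": 1e9, "bytes_comm": 1e6},
                 {"name": "mux",  "flops": 1e6, "bytes_comm": 1e6},
                 {"name": "idct", "flops": 5e8, "bytes_comm": 1e5}])
print(plan, split_horizontally(plan["gpu"], n_gpus=2))
```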

10.
Energy saving is a hot topic in embedded system design; the goal of energy management in embedded control systems is to achieve the lowest possible energy consumption while preserving overall control performance. Because of many sources of uncertainty and dynamically varying load, embedded systems always run in unpredictable open environments. To cope effectively with CPU load variation and uncertain control-task execution times, an energy-aware feedback scheduling method based on fuzzy logic is proposed. The feedback scheduler dynamically adjusts the CPU's processing speed to keep CPU utilization under control, providing an effective energy-management mechanism. Simulations comparing the method with conventional dynamic voltage scaling, the traditional approach, and the ideal case confirm the effectiveness of the feedback scheduling method.
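A hedged sketch of a fuzzy feedback scheduler: the utilization error and its change select a CPU-speed adjustment through a small rule table. The membership functions and rules are illustrative, not the paper's design:

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_speed_delta(err, derr):
    # err = utilization setpoint - measured utilization (positive: CPU idle).
    neg  = {"e": tri(err, -1.0, -0.5, 0.0), "de": tri(derr, -1.0, -0.5, 0.0)}
    zero = {"e": tri(err, -0.5,  0.0, 0.5), "de": tri(derr, -0.5,  0.0, 0.5)}
    pos  = {"e": tri(err,  0.0,  0.5, 1.0), "de": tri(derr,  0.0,  0.5, 1.0)}
    rules = [  # (firing strength, speed change): slow down when the CPU idles
        (min(pos["e"],  pos["de"]),  -0.2),
        (min(zero["e"], zero["de"]),  0.0),
        (min(neg["e"],  neg["de"]),  +0.2),
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0       # weighted-average defuzzification

speed = 1.0
speed = max(0.2, min(1.0, speed + fuzzy_speed_delta(err=0.3, derr=0.1)))
```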

11.
In recent years, driven by advances in semiconductor technology and increasingly diverse applications, heterogeneous multicore processors have been widely adopted in high-performance embedded systems. A main challenge for such systems is how to manage and allocate the available resources (including processor cores) at runtime so as to meet the performance and power requirements of the system and the applications it runs. Although mainstream resource-management techniques perform well on performance and/or power optimization, the management components they produce often lack rigorous reliability guarantees. A method based on discrete controller synthesis (DCS) is therefore proposed for the automatic and reliable design of online resource-management strategies for heterogeneous multicore systems: DCS, a formal technique that automatically constructs management controllers, is applied to the design of online resource-management components. The method describes the runtime behavior of the heterogeneous system with formal models (for example, how cores are allocated to applications) and casts online resource management as a DCS problem oriented toward a system-management objective (for example, maximizing application performance). On this basis, the method is demonstrated and validated with existing DCS tools, and the scalability of the DCS approach is evaluated.
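A toy sketch of the DCS idea on a deliberately tiny state space: forbidden states are pruned by a fixed-point computation, and the resulting safe set induces the controller's allowed actions. The states, actions, and safety property are illustrative:

```python
CORES = 2
ACTIONS = ("admit_big", "admit_small", "finish")
STATES = [(a, c) for a in range(4) for c in range(6)]   # (running apps, cores used)
FORBIDDEN = {s for s in STATES if s[1] > CORES}         # over-allocation is unsafe

def step(state, action):
    a, c = state
    if action == "admit_big":
        return (a + 1, c + 2)
    if action == "admit_small":
        return (a + 1, c + 1)
    if action == "finish" and a > 0:
        return (a - 1, max(c - 1, 0))
    return state

def synthesize_safe_set():
    # Greatest fixed point: keep a state only if some controllable action
    # leads to a state that is itself kept.
    safe = set(STATES) - FORBIDDEN
    while True:
        shrunk = {s for s in safe if any(step(s, a) in safe for a in ACTIONS)}
        if shrunk == safe:
            return safe
        safe = shrunk

SAFE = synthesize_safe_set()

def allowed(state):                  # the synthesized controller
    return [a for a in ACTIONS if step(state, a) in SAFE]

print(allowed((1, 1)))               # e.g. only actions that stay thermally/resource safe
```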

12.
Cloud computing allows the execution and deployment of different types of applications, such as interactive databases or web-based services, which require distinct types of resources. These applications lease cloud resources for considerably long periods and usually occupy various resources to maintain a high quality of service (QoS). General big-data batch-processing workloads, on the other hand, are less QoS-sensitive and require massively parallel cloud resources for short periods. Despite the elasticity of cloud computing, the fine-scale characteristics of cloud-based applications may cause temporarily low resource utilization in cloud computing systems, while process-intensive, highly utilized workloads suffer from performance issues. The ability to schedule heterogeneous workloads for efficient utilization is therefore a challenging issue for cloud owners. In this paper, addressing the impact of workload heterogeneity on low utilization of cloud computing systems, a conjoint resource-allocation scheme for cloud applications and processing jobs is presented to enhance cloud utilization. The main idea is to schedule processing jobs and cloud applications jointly in a preemptive way. Utilization-efficient resource allocation, however, requires exact modeling of workloads; so, first, a novel methodology for modeling processing jobs and other cloud applications is proposed. Such jobs are modeled as a collection of parallel and sequential tasks in a Markovian process, which enables us to analyze and calculate the resources required to serve the tasks efficiently. The next step uses the proposed model to develop a preemptive scheduling algorithm for the processing jobs in order to improve resource utilization and its associated costs in the cloud computing system. Accordingly, a preemption-based resource-allocation architecture is proposed to utilize the idle reserved resources for processing jobs effectively and efficiently in cloud paradigms. Performance metrics such as the service time of the processing jobs are then investigated. The accuracy of the proposed analytical model and scheduling analysis is verified through simulations and experimental results, which also shed light on the achievable QoS level for preemptively allocated processing jobs.
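A hedged sketch of one consequence of such a model: for a job read as sequential stages of k parallel exponential tasks, the expected stage time is the harmonic number H_k divided by the service rate; the job structure and rate below are illustrative, not the paper's exact Markovian model:

```python
def stage_time(parallel_tasks, mu):
    # Expected max of k i.i.d. exponential(mu) task times = H_k / mu.
    return sum(1.0 / i for i in range(1, parallel_tasks + 1)) / mu

def expected_service_time(stages, mu=1.0):
    """stages: parallelism degree of each sequential stage of the job."""
    return sum(stage_time(k, mu) for k in stages)

# A fork-join pipeline: 4-way stage, then 2-way stage, then a sequential tail.
print(expected_service_time([4, 2, 1]))   # ~4.58 time units at mu = 1
```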

13.
Grid computing is a newly developed technology for complex systems with large-scale resource sharing, wide-area communication, and multi-institutional collaboration. Grid scheduling is an important piece of infrastructure in the grid computing environment. Most existing grid scheduling methods focus on maximizing processor utilization without taking grid load into consideration, which can lead to significant performance inefficiencies such as large job queues and processing delays. In this paper, we propose a multiagent-based scheduling system for computational grids with a new approach. Agent technology suits computational grids because of the dynamic, heterogeneous, and autonomous nature of the grid. The main idea of the proposed system is to combine static scheduling, using a fixed scheduling algorithm, with dynamic adjustment through the autonomous behavior of agents. Simulation experiments demonstrate the superiority of the proposed system in reducing the load of the grid and minimizing the response time for executing user applications.
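A hedged sketch of the static-plus-autonomous combination: a fixed shortest-queue assignment followed by per-node agents that offload work when their load exceeds the mean; job durations and the threshold are illustrative:

```python
def static_assign(jobs, nodes):
    """Fixed algorithm: each job goes to the currently least-loaded node."""
    queues = {n: [] for n in nodes}
    for j in jobs:
        target = min(queues, key=lambda n: sum(queues[n]))
        queues[target].append(j)
    return queues

def agent_step(queues, threshold=1.5):
    # Autonomous behavior: an agent whose load exceeds the mean by 50%
    # migrates its longest job to the lightest node.
    mean = sum(sum(q) for q in queues.values()) / len(queues)
    for n, q in queues.items():
        if q and sum(q) > threshold * mean:
            lightest = min(queues, key=lambda m: sum(queues[m]))
            if lightest != n:
                queues[lightest].append(q.pop(q.index(max(q))))
    return queues

qs = static_assign([5, 3, 8, 2, 9, 1], ["n1", "n2", "n3"])
qs = agent_step(qs)     # agents keep rebalancing as the grid load changes
```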

14.
Grid computing has emerged as a new field, distinguished from conventional distributed computing. It focuses on large-scale resource sharing, innovative applications, and, in some cases, high-performance orientation. The Grid serves organizations as a comprehensive and complete system through which maximum utilization of resources is achieved. Load balancing is a process that involves resource management and effective load distribution among the resources, and it is therefore considered very important in Grid systems. For a Grid, a dynamic, distributed load-balancing scheme provides deadline control for tasks. Because of deadline failures, developing, deploying, and executing long-running applications over the grid remains a challenge, so recovery from deadline failure is an essential feature of Grid computing. In this paper, we propose a dynamic distributed load-balancing technique called Enhanced GridSim with Load balancing based on Deadline Failure Recovery (EGDFR) for computational Grids with heterogeneous resources. The proposed algorithm EGDFR is an improved version of the existing EGDC in which we perform load balancing through a scheduling system that includes a mechanism for recovering from deadline failures of the Gridlets. Extensive simulation experiments quantify the performance of the proposed load-balancing strategy on the GridSim platform and show that the proposed system can considerably improve Grid performance in terms of total execution time, percentage gain in execution time, average response time, resubmission time, and throughput. The proposed load-balancing technique performs 7% better than EGDC with a constant number of resources and 11% better than EGDC with a constant number of Gridlets.
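A hedged sketch of the recovery mechanism: when a Gridlet is predicted to miss its deadline on its current resource, it is resubmitted to the resource with the earliest predicted finish. The fields and the prediction rule are assumptions, not EGDFR's exact logic:

```python
def predicted_finish(res, gridlet, now):
    # Queued work ahead of the gridlet plus its own length, at the resource speed.
    return now + (res["queued_work"] + gridlet["length"]) / res["mips"]

def check_and_recover(gridlet, current, resources, now):
    if predicted_finish(current, gridlet, now) <= gridlet["deadline"]:
        return current                                  # on track: stay put
    best = min(resources, key=lambda r: predicted_finish(r, gridlet, now))
    if predicted_finish(best, gridlet, now) > gridlet["deadline"]:
        return None                                     # unrecoverable miss
    current["queued_work"] -= gridlet["length"]         # pull the gridlet back
    best["queued_work"] += gridlet["length"]            # resubmit elsewhere
    return best

resources = [{"name": "r1", "mips": 100, "queued_work": 5000},
             {"name": "r2", "mips": 400, "queued_work": 1000}]
g = {"length": 2000, "deadline": 30.0}
resources[0]["queued_work"] += g["length"]              # initially on r1
print(check_and_recover(g, resources[0], resources, now=0.0)["name"])  # -> r2
```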

15.
Computing systems should be designed to exploit parallelism in order to improve performance. In general, a GPU (Graphics Processing Unit) can provide more parallelism than a CPU (Central Processing Unit), which has led to the wide use of heterogeneous computing systems that utilize the CPU and the GPU together. In such systems, the efficiency of the scheduling scheme that selects the device, CPU or GPU, on which to execute an application is one of the most critical factors determining performance. This paper proposes a dynamic scheduling scheme that selects the device for each application based on estimated-execution-time information. The proposed scheme chooses between the CPU and the GPU so as to minimize completion time, yielding better system performance, although it requires a training period to collect the execution history. According to our simulations, the proposed estimated-execution-time scheduling improves the utilization of both the CPU and the GPU compared with existing scheduling schemes, reducing execution time and enhancing the energy efficiency of heterogeneous computing systems.
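A minimal sketch of the scheme described above: per-(kernel, device) execution history drives an estimated-completion-time comparison, with untried devices sampled first as the training period; the names and structure are illustrative:

```python
from collections import defaultdict

history = defaultdict(list)                 # (kernel, device) -> past runtimes
busy_until = {"cpu": 0.0, "gpu": 0.0}       # when each device becomes free

def pick_device(kernel, now):
    def est_completion(dev):
        runs = history[(kernel, dev)]
        if not runs:                        # training period: sample this device
            return float("-inf")
        return max(busy_until[dev], now) + sum(runs) / len(runs)
    return min(("cpu", "gpu"), key=est_completion)

def record(kernel, dev, started, elapsed):
    history[(kernel, dev)].append(elapsed)
    busy_until[dev] = started + elapsed

dev = pick_device("matmul", now=0.0)        # first calls exercise both devices
record("matmul", dev, started=0.0, elapsed=0.8)
```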

16.
Edge computing demands high real-time performance and large-scale data interaction and processing; long scheduling times, high communication latency, and load imbalance among heterogeneous edge nodes are the core problems limiting edge-computing performance, and traditional cloud platforms cannot meet these new requirements. This paper studies scheduling optimization of Storm edge nodes in an edge-computing environment and builds a Storm task-offloading scheduling model for edge computing. For the real-time dynamic assignment of topology tasks across heterogeneous edge nodes, a heuristic dynamic programming algorithm (Inspire Dynamic Programming, IDP) is proposed that achieves globally optimized scheduling by changing how Storm Task instances are ordered and assigned and how Task instances map to Slot task slots. In addition, to overcome the limitation that topology parallelism is bounded by JVM stack depth, a scheduling strategy based on the bat algorithm is proposed. Experimental results show that, compared with Storm's scheduling algorithm, the proposed algorithms improve edge-node CPU utilization by about 60% on average and cluster throughput by about 8.2% on average, and can therefore meet the high real-time processing requirements among edge nodes.
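A greedy stand-in for the IDP remapping idea: order Task-instance pairs by communication weight and co-locate heavily communicating pairs in slots on the same node. The topology and capacities are illustrative, and the bat-algorithm part is not sketched:

```python
def assign(tasks, edges, nodes, slots_per_node):
    # edges: {(task_a, task_b): traffic}; heaviest pairs are co-located first.
    slot_of, load = {}, {n: 0 for n in nodes}

    def place(t, preferred=None):
        node = (preferred
                if preferred is not None and load[preferred] < slots_per_node
                else min(nodes, key=lambda n: load[n]))
        slot_of[t] = node
        load[node] += 1

    for (a, b), _ in sorted(edges.items(), key=lambda kv: -kv[1]):
        if a not in slot_of:
            place(a, slot_of.get(b))
        if b not in slot_of:
            place(b, slot_of.get(a))
    for t in tasks:                         # tasks with no communication edges
        if t not in slot_of:
            place(t)
    return slot_of

print(assign(["spout", "bolt1", "bolt2", "bolt3"],
             {("spout", "bolt1"): 100, ("bolt1", "bolt2"): 80,
              ("bolt2", "bolt3"): 5},
             nodes=["edge0", "edge1"], slots_per_node=2))
```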

17.
A scalable hierarchical resource management model for metacomputing systems. Cited by: 8 (self-citations: 1, citations by others: 8)
Resource management is a key problem in metacomputing systems; to support large-scale metacomputing applications, resource co-allocation must be solved effectively. A scalable hierarchical resource-management model is proposed. Building on existing batch scheduling systems, a core resource manager interacts with the local resource schedulers to give heterogeneous resources a uniform interface. To solve the resource co-allocation problem, a site-based hierarchical management mechanism is proposed that places an application's resource requests on a relatively close-knit set of resources, which not only reduces the overall communication overhead of metacomputing applications but also provides resource-reservation services for computations.

18.
The synchronous model of computation is well suited for real-time systems because it allows static analysis to find and guarantee their reaction times. Today's multi-core systems are becoming the predominant computing platforms. Synchronous programs are typically compiled into single-threaded code, which makes them unable to exploit the parallelism of multi-core platforms; moreover, static timing analysis is highly intractable on multi-core systems. This article proposes a novel methodology for finding mappings and schedules of synchronous programs that statically guarantee reaction times on a multi-core system consisting of two types of time-predictable cores. The methodology combines design-space exploration based on an evolutionary algorithm with scheduling of parts of synchronous programs, and it minimizes resource usage in terms of the number of cores by finding the mapping and schedule with a guaranteed reaction time for architectures with different numbers of cores. In particular, we: (a) transform a synchronous program written in synchronous SystemJ into a graph-based model with two types of computation nodes suitable for execution on the two types of time-predictable cores, (b) map computation nodes onto a customizable multi-core platform using genetic operations, and (c) generate a resulting static schedule of the computation nodes for each mapping as part of the design-space exploration. The design flow, from program specification and node mapping to design-space exploration and multi-core scheduling, is completely automated.
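A hedged sketch of the schedule-evaluation step inside such an exploration: a static list schedule of the computation-node DAG for one candidate mapping yields its guaranteed reaction time. The DAG, WCET table, and core types are illustrative:

```python
CORE_TYPE = {"c0": "typeA", "c1": "typeB"}   # two types of time-predictable cores

def topo_order(dag):
    order, seen = [], set()
    def visit(n):
        if n not in seen:
            seen.add(n)
            for p in dag[n]:
                visit(p)
            order.append(n)
    for n in dag:
        visit(n)
    return order

def reaction_time(dag, wcet, mapping):
    """Static list schedule: start each node after its predecessors and
    after its mapped core becomes free; the makespan is the reaction time."""
    finish, core_free = {}, {}
    for node in topo_order(dag):
        core = mapping[node]
        ready = max((finish[p] for p in dag[node]), default=0)
        start = max(ready, core_free.get(core, 0))
        finish[node] = start + wcet[(node, CORE_TYPE[core])]
        core_free[core] = finish[node]
    return max(finish.values())               # statically guaranteed bound

dag = {"in": [], "f": ["in"], "g": ["in"], "out": ["f", "g"]}
wcet = {(n, t): c for n, c in (("in", 2), ("f", 5), ("g", 4), ("out", 1))
        for t in ("typeA", "typeB")}
print(reaction_time(dag, wcet, {"in": "c0", "f": "c0", "g": "c1", "out": "c0"}))
```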
