Similar Documents
1.
We consider the problem of power and performance management for a multicore server processor in a cloud computing environment through optimal server configuration for a specific application environment. The motivation of the study is that such optimal virtual server configuration is important for dynamic resource provisioning in a cloud computing environment to optimize the power-performance tradeoff for specific types of applications. Our strategy is to treat a multicore server processor as an M/M/m queueing system with multiple servers. The system performance measures are the average task response time and the average power consumption. Two core speed and power consumption models are considered, namely the idle-speed model and the constant-speed model. Our investigation includes justification of centralized management of computing resources, server speed constrained optimization, power constrained performance optimization, and performance constrained power optimization. Our main results are: (1) cores should be managed in a centralized way to provide the highest performance without consuming more energy in cloud computing; (2) for a given server speed constraint, fewer high-speed cores perform better than more low-speed cores; furthermore, there is an optimal selection of server size and core speed, which can be obtained analytically, such that a multicore server processor consumes minimum power; (3) for a given power consumption constraint, there is an optimal selection of server size and core speed, which can be obtained numerically, such that the best performance is achieved, i.e., the average task response time is minimized; (4) for a given task response time constraint, there is an optimal selection of server size and core speed, which can be obtained numerically, such that minimum power consumption is achieved while the given performance guarantee is maintained.
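As a rough illustration of the M/M/m formulation described in this abstract (not the authors' code), the following Python sketch computes the mean task response time via the standard Erlang C formula and a simple dynamic power estimate under the idle-speed and constant-speed models; the cube-law power exponent and all numeric values are illustrative assumptions.

```python
# Hedged sketch of the M/M/m performance/power trade-off described above.
# The power model P ~ (busy cores) * s**alpha and all parameter values are
# illustrative assumptions, not the paper's exact formulation.
from math import factorial

def erlang_c(m, rho):
    """Probability that an arriving task must wait (M/M/m, per-core utilization rho)."""
    a = m * rho
    num = (a ** m / factorial(m)) / (1 - rho)
    den = sum(a ** k / factorial(k) for k in range(m)) + num
    return num / den

def avg_response_time(lam, m, speed, work=1.0):
    """Mean response time: per-core service rate = speed / mean task size."""
    mu = speed / work
    rho = lam / (m * mu)
    assert rho < 1, "system must be stable"
    wait = erlang_c(m, rho) / (m * mu - lam)
    return wait + 1.0 / mu

def avg_power(lam, m, speed, work=1.0, alpha=3.0, idle_speed_model=True):
    """Dynamic power ~ s**alpha per busy core; idle cores consume nothing in the
    idle-speed model and full dynamic power in the constant-speed model."""
    rho = lam * work / (m * speed)
    busy_cores = m * rho if idle_speed_model else m
    return busy_cores * speed ** alpha

# Example: compare 4 fast cores against 8 slow cores at equal aggregate speed.
for m, s in [(4, 2.0), (8, 1.0)]:
    print(m, s, avg_response_time(lam=3.0, m=m, speed=s),
          avg_power(lam=3.0, m=m, speed=s))
```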

2.
We address scheduling independent and precedence-constrained parallel tasks on multiple homogeneous processors in a data center with dynamically variable voltage and speed as combinatorial optimization problems. We consider the problem of minimizing schedule length under an energy consumption constraint and the problem of minimizing energy consumption under a schedule length constraint on multiple processors. Our approach is to use level-by-level scheduling algorithms to deal with precedence constraints. We use a simple system partitioning and processor allocation scheme, which always schedules as many parallel tasks as possible for simultaneous execution. We use two heuristic algorithms, SIMPLE and GREEDY, for scheduling independent parallel tasks in the same level. We adopt a two-level energy/time/power allocation scheme, namely, optimal energy/time allocation among levels of tasks and equal power supply to tasks in the same level. Our approach yields significant performance improvement over previous algorithms for scheduling independent and precedence-constrained parallel tasks.
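A minimal Python sketch of the level-by-level structure described above, assuming a simple dict-based task DAG; the within-level packing here is a plain longest-task-first heuristic standing in for SIMPLE/GREEDY, and the energy/time allocation among levels is omitted.

```python
# Sketch: group precedence-constrained tasks by level, then schedule each
# level's independent tasks on m processors.  The DAG representation and the
# packing heuristic are illustrative assumptions, not the paper's algorithms.
from collections import defaultdict

def levels(preds):
    """Assign each task the length of its longest chain of predecessors."""
    memo = {}
    def level(t):
        if t not in memo:
            memo[t] = 0 if not preds[t] else 1 + max(level(p) for p in preds[t])
        return memo[t]
    return {t: level(t) for t in preds}

def schedule_level(tasks, sizes, m):
    """Pack one level's independent tasks onto m processors:
    longest task first, always onto the currently least-loaded processor."""
    finish = [0.0] * m
    for t in sorted(tasks, key=lambda t: -sizes[t]):
        finish[finish.index(min(finish))] += sizes[t]
    return max(finish)          # length of this level's schedule

def schedule(preds, sizes, m):
    by_level = defaultdict(list)
    for t, l in levels(preds).items():
        by_level[l].append(t)
    # Levels execute one after another, so the schedule length is the sum.
    return sum(schedule_level(by_level[l], sizes, m) for l in sorted(by_level))

preds = {"a": [], "b": [], "c": ["a"], "d": ["a", "b"], "e": ["c", "d"]}
sizes = {"a": 4, "b": 2, "c": 3, "d": 3, "e": 5}
print(schedule(preds, sizes, m=2))   # prints 12 for this example DAG
```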

3.
For multicore processor systems with per-core (independent) DVFS, a parallel task scheduling optimization algorithm based on a K-thread low-energy model (Tasks Optimization based on Energy-Effectiveness Model, TO-EEM) is proposed. Compared with traditional energy-aware scheduling of parallel tasks, the algorithm aims not only to reduce the processor's instantaneous power consumption by lowering its frequency, but also to account for thread blocking caused by synchronization and mutual exclusion among parallel tasks, allocating thread resources appropriately to shorten thread synchronization time and improve parallel performance. Under a given parallel speedup constraint, it improves resource utilization and reduces energy consumption, achieving a trade-off between program energy and performance. Extensive simulation experiments show that the proposed task optimization model yields clear energy savings, effectively reduces processor power consumption, and consistently maintains near-linear speedup.

4.
Power Optimization for Preemption Threshold Scheduling (cited by 2: 0 self-citations, 2 by others)
Applying DVS (Dynamic Voltage Scaling) lengthens task execution times, which in turn causes a rapid increase in the processor's static power consumption (caused by leakage current in CMOS circuits). Procrastination scheduling is an effective method proposed in recent years to reduce static power: by postponing the normal execution of tasks, it keeps the processor in a sleep or shut-down state for as long as possible, thereby avoiding excessive static power leakage. For periodic task sets scheduled under a preemption threshold policy on a variable-voltage processor, this paper combines energy-aware scheduling with procrastination scheduling and proposes a two-phase energy-saving scheduling algorithm: an offline algorithm first computes the optimal processor execution speed for each task, and an online simulated scheduling algorithm then computes each task's procrastination interval, dynamically determining when to switch the processor on or off. Case studies and simulation experiments show that the method further reduces the power consumption of preemption threshold task scheduling.
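The rationale for combining DVS with procrastination can be illustrated with a commonly used power model (an assumption here, not necessarily this paper's exact model): with P(s) = P_static + C*s^alpha, energy per cycle is minimized at a critical speed, and below that speed the processor should finish its work and sleep rather than slow down further.

```python
# Illustrative critical-speed computation under the assumed power model
# P(s) = P_static + C * s**alpha; energy per cycle is P(s)/s.  Parameter
# values are made up for the example.
def critical_speed(p_static, c, alpha=3.0):
    """Speed below which further slowdown increases energy per cycle,
    so sleeping (procrastination) beats further voltage scaling."""
    return (p_static / ((alpha - 1.0) * c)) ** (1.0 / alpha)

def speed_for_task(utilization_speed, p_static, c, alpha=3.0):
    """Offline phase (sketch): never run below the critical speed even if the
    deadline would allow it; an online phase would then insert procrastination
    intervals and keep the processor asleep during them."""
    return max(utilization_speed, critical_speed(p_static, c, alpha))

print(critical_speed(p_static=0.1, c=1.0))        # ~0.37 of maximum speed
print(speed_for_task(0.25, p_static=0.1, c=1.0))  # clamped up to ~0.37
```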

5.
The development of multi-core systems (MCS) has considerably advanced existing technologies in the field of computer architecture. An MCS comprises several processors that are heterogeneous in resource capacities, working environments, topologies, and so on. Existing multi-core technology opens additional research opportunities for energy minimization through effective task scheduling, yet task scheduling in multi-core systems remains underexplored. This paper presents a new hybrid genetic algorithm (GA) with krill herd (KH) based energy-efficient scheduling technique for multi-core systems (GAKH-SMCS). The goal of the GAKH-SMCS technique is to derive task schedules that achieve faster completion time and minimum energy dissipation. The GAKH-SMCS model uses a multi-objective fitness function with four parameters, makespan, processor utilization, speedup, and energy consumption, to schedule tasks proficiently. The performance of the GAKH-SMCS model has been validated against two datasets, a random dataset and a benchmark dataset. The experimental outcomes confirm the effectiveness of the GAKH-SMCS model in terms of makespan, processor utilization, speedup, and energy consumption. The overall simulation results show that the presented GAKH-SMCS model achieves energy efficiency through an optimal task scheduling process in MCS.
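The abstract names a four-parameter multi-objective fitness function; a weighted-sum form such as the Python sketch below is one common way to realize it, but the weights, normalization, and sign conventions are illustrative assumptions rather than the GAKH-SMCS definition.

```python
# Hedged sketch of a multi-objective fitness over the four criteria named
# above; the weighted-sum form and the weights are illustrative assumptions.
def fitness(schedule_metrics, weights=(0.3, 0.2, 0.2, 0.3)):
    """schedule_metrics: dict with makespan, utilization (0..1), speedup,
    and energy for a candidate task-to-core assignment; lower is better."""
    m = schedule_metrics
    w_mk, w_ut, w_sp, w_en = weights
    return (w_mk * m["makespan"]        # shorter schedules preferred
            + w_en * m["energy"]        # less energy preferred
            - w_ut * m["utilization"]   # higher utilization preferred
            - w_sp * m["speedup"])      # higher speedup preferred

# A GA/KH search would minimize this value over candidate assignments.
candidate = {"makespan": 120.0, "utilization": 0.82, "speedup": 3.4, "energy": 75.0}
print(fitness(candidate))
```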

6.
Nowadays, the multi-core processor is the main technology used in desktop PCs, laptop computers and mobile hardware platforms. As the number of cores on a chip keeps increasing, complexity grows and the impact on both the power and the performance of a processor becomes larger. In multi-processors, the number of cores and various parameters, such as issue width, number of instructions and execution time, are key design factors for balancing the amount of thread-level parallelism and instruction-level parallelism. In this paper, we perform a comprehensive simulation study that aims to find the optimum number of processor cores in desktop/laptop computing processor models with shallow pipeline depth. This paper also explores the trade-off between the number of cores and different parameters used in multi-processors in terms of power-performance gains and analyzes the impact of 3D stacking on the design of simultaneous multi-threading and chip multiprocessing. Our analysis shows that the optimum number of cores varies with different classes of workloads, namely SPEC2000, SPEC2006 and MiBench. The simulation study, using architectures with shorter pipeline depth, shows that (1) the optimum number of cores for power-performance is 8, (2) the optimum number of threads is in the range [2, 4], and (3) beyond 32 cores, multi-core processors are no longer efficient in terms of performance benefits and overall power consumption.

7.
An Online Energy-Efficient Real-Time Scheduling Algorithm Based on Global EDF for Multicore Systems (cited by 3: 1 self-citation, 2 by others)
张冬松  吴彤  陈芳园  金士尧 《软件学报》2012,23(4):996-1009
As energy consumption becomes an increasingly prominent problem in multicore systems, reducing system energy while meeting timing constraints is one of the pressing issues in multicore real-time energy-aware scheduling research. Existing work assumes that the attributes of real-time tasks are known in advance, whereas in practice these attributes only become available once a task arrives. For a general task model and without any prior knowledge, this paper proposes an online energy-efficient hard real-time scheduling algorithm based on Global EDF for multicore systems. By introducing a speed adjustment factor, exploiting slack time, and combining dynamic power management with dynamic voltage/frequency scaling, it lowers the execution speed of tasks in the multicore system and reaches a reasonable trade-off between real-time constraints and energy savings. The proposed algorithm performs dynamic voltage/frequency scaling only at context switches and task completions, has low computational complexity, and is easy to implement in a real-time operating system. Experimental results show that the algorithm is applicable to different types of on-chip DVFS technologies and consistently saves more energy than Global EDF, with energy savings of 15%-20% at best and 5%-10% at worst.
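A minimal sketch of the "adjust speed only at scheduling points" idea described above: at each context switch or task completion, pick a normalized speed from the remaining demand of active jobs, clamped by a speed adjustment factor. The paper's actual slack reclamation and schedulability conditions are not reproduced here.

```python
# Sketch only: on every context switch or task completion, choose a speed
# proportional to the remaining demand of active jobs, clamped below by a
# speed adjustment factor (min_factor).  All details are assumptions.
def speed_at_scheduling_point(active_jobs, m, min_factor=0.4):
    """active_jobs: list of (remaining_work, time_to_deadline) pairs;
    m: number of cores.  Returns a normalized speed in (0, 1]."""
    density = sum(w / d for w, d in active_jobs if d > 0)
    needed = density / m                 # per-core share of remaining demand
    return min(1.0, max(min_factor, needed))

# Example: three active jobs on 2 cores, evaluated at a context switch.
print(speed_at_scheduling_point([(2.0, 10.0), (1.0, 4.0), (3.0, 12.0)], m=2))
```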

8.
Research on Design Methods for Low-Energy Microprocessor Systems Based on DVS (cited by 3: 0 self-citations, 3 by others)
Energy consumption has become one of the biggest challenges in microprocessor design, and the microprocessor accounts for a significant share of the energy consumed in portable devices. The DVS (Dynamic Voltage Scaling) mechanism reduces a processor's energy consumption at run time by lowering its operating voltage, which also requires lowering its speed. A voltage scheduler assigns an appropriate operating voltage by analyzing the constraints and requirements of the application. This paper discusses design methods for microprocessor systems with variable speed and input voltage. When the processor runs at low speed, lowering the operating voltage can greatly reduce its energy consumption, enabling the application system to quickly adjust processor performance to changes in workload.

9.
With energy consumption becoming one of the first-class optimization parameters in computer system design, compilation techniques that consider performance and energy simultaneously are expected to play a central role. In particular, compiling a given application code under performance and energy constraints is becoming an important problem. In this paper, we focus on an on-chip multiprocessor architecture and present a set of code optimization strategies. We first evaluate an adaptive loop parallelization strategy (i.e., a strategy that allows each loop nest to execute using a different number of processors if doing so is beneficial) and measure the potential energy savings when processors unused during execution of a nested loop are shut down (i.e., placed into a power-down or sleep state). Our results show that shutting down unused processors can lead to as much as 67 percent energy savings at the expense of up to 17 percent performance loss in a set of array-intensive applications. To eliminate this performance penalty, we also discuss and evaluate a processor preactivation strategy based on compile-time analysis of nested loops. Based on our experiments, we conclude that an adaptive loop parallelization strategy combined with idle-processor shutdown and preactivation can be very effective in reducing energy consumption without increasing execution time. We then generalize our strategy and present an application parallelization strategy based on integer linear programming (ILP). Given an array-intensive application, our optimization strategy determines the number of processors to be used in executing each loop nest based on the objective function and additional compilation constraints provided by the user/programmer. Our initial experience with this constraint-based optimization strategy shows that it is very successful in optimizing array-intensive applications on on-chip multiprocessors under multiple energy and performance constraints.
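As a simplified stand-in for the ILP formulation mentioned above (illustrative only, with made-up time and energy models), the sketch below enumerates per-loop-nest processor counts and keeps the assignment with minimum estimated energy that still meets a total time budget.

```python
# Simplified stand-in for the per-loop-nest processor selection: exhaustive
# enumeration instead of ILP.  Time/energy models and numbers are assumptions.
from itertools import product

def choose_processors(nests, max_procs, time_budget):
    """nests: list of dicts with 'work' (cycles) and 'par' (parallel fraction).
    Returns (energy, per-nest processor counts) minimizing energy under the budget."""
    def time(nest, p):            # Amdahl-style time estimate for one nest
        return nest["work"] * ((1 - nest["par"]) + nest["par"] / p)
    def energy(nest, p):          # active cores burn power for the nest's duration
        return p * time(nest, p)
    best = None
    for counts in product(range(1, max_procs + 1), repeat=len(nests)):
        t = sum(time(n, p) for n, p in zip(nests, counts))
        if t > time_budget:
            continue
        e = sum(energy(n, p) for n, p in zip(nests, counts))
        if best is None or e < best[0]:
            best = (e, counts)
    return best

nests = [{"work": 100.0, "par": 0.9}, {"work": 50.0, "par": 0.5}]
print(choose_processors(nests, max_procs=8, time_budget=120.0))
```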

10.
Current real-time systems offer more computational power to cope with CPU-intensive applications. However, this capability comes at the price of higher energy consumption and, eventually, higher heat dissipation. As a remedy, these issues are addressed by adjusting the system speed on the fly so that application deadlines are respected while the overall system energy consumption is reduced. In addition, the current state of the art in multi-core technology opens further research opportunities for energy reduction through power-efficient scheduling. However, the multi-core front is relatively unexplored from the perspective of task scheduling; to the best of our knowledge, very little is known as yet about integrating a power-efficiency component into real-time scheduling theory tailored for multi-core platforms. In this paper, we first propose a technique to find the lowest core speed at which to schedule individual tasks. The proposed technique is experimentally evaluated, and the results show that our test outperforms existing counterparts. Following that, a lightest-task shifting policy is adopted for balancing core utilization, which is used to determine the uniform system speed for a given task set. The aforementioned guarantees that: (i) all tasks meet their deadlines and (ii) the overall system energy consumption is reduced.
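A hedged sketch of the two steps described above: a per-core speed lower bound derived from EDF utilization (a simple stand-in for the paper's test), and a lightest-task shifting loop that balances core utilization before a uniform system speed is chosen.

```python
# Sketch only: EDF utilization is used as a stand-in for the paper's
# schedulability test; the shifting rule is a plausible interpretation of
# "lightest task shifting", not the paper's exact policy.
def core_speed(tasks):
    """tasks: list of (wcet_at_full_speed, period); EDF-style bound:
    the core must run at least as fast as its total utilization."""
    return sum(c / t for c, t in tasks)

def balance_and_uniform_speed(cores):
    """cores: list of task lists.  Move the lightest task off the most loaded
    core while doing so lowers the maximum per-core speed requirement, then
    return the uniform speed (the largest per-core lower bound)."""
    while True:
        speeds = [core_speed(ts) for ts in cores]
        hi, lo = speeds.index(max(speeds)), speeds.index(min(speeds))
        if hi == lo:
            return max(speeds)
        lightest = min(cores[hi], key=lambda t: t[0] / t[1])
        gain = lightest[0] / lightest[1]
        if speeds[lo] + gain >= speeds[hi]:
            return max(speeds)
        cores[hi].remove(lightest)
        cores[lo].append(lightest)

cores = [[(2, 10), (3, 15), (1, 20)], [(1, 10)]]
print(balance_and_uniform_speed(cores))   # uniform speed after balancing
```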

11.

With the rise of edge computing paradigms, multimedia applications will have to tackle unprecedented management issues, pursuing an optimal balance between performance, Quality of Service (QoS), and power consumption. In this paper, we investigate a novel paradigm to deploy multimedia elastic applications at the edge in a very energy-efficient manner. Our approach is based on pre-provisioning virtual resources that remain "frozen" until the application scales out. Frozen resources are treated in a special way by the infrastructure, leveraging aggressive power-saving mechanisms that keep their impact on energy consumption and performance negligible. We report extensive measurements of QoS and power consumption carried out in a real testbed, which is the first working implementation of the proposed paradigm. Our work shows how resource utilization and performance can be increased by leveraging SDN technologies and a conscious setting of cloud parameters. We investigate the trade-off between performance and power consumption (i.e., energy efficiency) in relation to different consolidation strategies. Finally, we measure power consumption and estimate energy savings for an elastic video transcoding application deployed at the network edge.


12.
Holistic datacenter energy minimization should consider the interactions between computing and cooling and their source-specific usage patterns. Decisions about workload type, server configuration, load, utilization, etc. contribute to power consumption, influence the datacenter's thermal profile, and impact the energy required to keep temperature within operational thresholds. In this paper, we present an adaptive virtual machine placement and consolidation approach to improve the energy efficiency of a cloud datacenter, accounting for server heterogeneity, the server processor's low-power SLEEP state, state transition latency, and integrated thermal controls to keep the datacenter within its operational temperature. Our proposed heuristic approach reduces energy consumption with an acceptable level of performance.

13.
Corollaries to Amdahl's Law for Energy (cited by 1: 0 self-citations, 1 by others)
This paper studies the important interaction between parallelization and energy consumption in a parallelizable application. Given the ratio of serial to parallel portions in an application and the number of processors, we first derive the optimal frequencies allocated to the serial and parallel regions of the application to minimize total energy consumption while execution time is preserved (i.e., speedup = 1). We show that the dynamic energy improvement due to parallelization grows faster with the number of processors than the speed improvement given by the well-known Amdahl's Law. Furthermore, we determine the conditions under which one can obtain both energy and speed improvement, as well as the amount of improvement. The formulas we obtain capture the fundamental relationship between parallelization, speedup, and energy consumption and can be directly utilized in energy-aware processor resource management. Our results form a basis for several interesting research directions in the area of power- and energy-aware parallel processing.
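A compact reconstruction of the kind of derivation the abstract refers to, under the common cube-law dynamic power assumption (P proportional to f^3, so dynamic energy per unit of work proportional to f^2); the paper's model may differ in detail.

```latex
% Sketch under the cube-law assumption.  Serial fraction s = 1 - p,
% n processors, nominal frequency f, single-processor execution time T.
\begin{align*}
\text{time constraint:}\quad & s\,T\,\frac{f}{f_s} + \frac{p\,T}{n}\,\frac{f}{f_p} = T
   && \text{(execution time preserved, speedup} = 1)\\
\text{energy:}\quad & E(f_s, f_p) \;\propto\; s\,f_s^{2} + p\,f_p^{2}
   && \text{(total work split } s : p\text{)}\\
\text{Lagrange condition:}\quad & f_s^{3} = n\,f_p^{3}
   \;\Longrightarrow\; f_s = n^{1/3}\,f_p .
\end{align*}
```

Substituting f_s = n^(1/3) f_p back into the time constraint gives the optimal frequency pair; under this model the dynamic energy decreases with n faster than the Amdahl speedup function increases, which is consistent with the claim in the abstract.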

14.
In the pursuit of higher performance, processor core clock frequencies keep rising and core designs grow ever more complex, making power consumption an increasingly prominent problem. Besides low-power techniques at the process and circuit levels, power can also be reduced effectively at the logic design stage by analyzing the characteristics of each functional module of the core and applying corresponding techniques. This work optimizes the logic design of two power-hungry modules of an out-of-order superscalar processor core, the register file and the reorder buffer, significantly reducing area and power consumption with almost no impact on program performance.

15.
Low power consumption and high computational performance are two important processor design goals for IoT applications. Achieving both design goals in one processor architecture is challenging because of their conflicting requirements. This paper introduces a reconfigurable micro-architectural technique that allows a Reduced Instruction Set Computing (RISC) processor to support IoT applications with different performance and energy trade-off requirements. The processor can be reconfigured into either multi-cycle execution mode (low computational speed with low dynamic power consumption) or pipelined execution mode (high computational speed at the expense of high dynamic power), based on the dynamic workload characteristics of IoT applications. Switching between modes is accomplished by exploiting the partial reconfiguration (PR) feature offered by recent advancements in modern FPGAs. A RISC processor was designed based on the proposed micro-architectural technique and implemented on an FPGA as an IoT sensor node. Experimental results demonstrate that the proposed technique with a reconfigurable micro-architecture significantly reduces dynamic energy consumption compared to conventional multi-cycle-only and pipeline-only micro-architectures, while allowing a better performance-energy trade-off in IoT applications.

16.
刘开南 《计算机应用》2019,39(11):3333-3338
To save energy in cloud data centers, several greedy-algorithm-based virtual machine (VM) migration strategies are proposed. These strategies divide the VM migration process into three steps, physical host state detection, VM selection, and VM placement, and apply greedy algorithms to optimize the VM selection and VM placement steps respectively. The three migration strategies are: MinMax_Host_Utilization, which selects from the host with the lowest utilization and places on the host with the highest utilization; MaxMin_Host_Power_Usage, which selects from the host with the highest power usage and places on the host with the lowest power usage; and MinMax_Host_MIPS, which selects from the host with the least computing capacity and places on the host with the most computing capacity. Upper or lower thresholds are set on metrics such as physical host CPU utilization, host energy consumption, and host computing capacity; following the greedy principle, VMs on hosts whose metrics exceed or fall below these thresholds are migrated. Tests using CloudSim as the cloud data center simulation environment show that, compared with the static threshold and median absolute deviation migration policies available in CloudSim, the greedy migration strategies consume 15% less total energy, perform 60% fewer VM migrations, and have a 5% lower average SLA violation rate.
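A minimal sketch of the first strategy, MinMax_Host_Utilization, as described above: select a VM from the least utilized under-threshold host and place it on the most utilized host that stays under the upper threshold. The threshold values, the "uphill only" guard against thrashing, and the data structures are illustrative assumptions; the paper's CloudSim implementation is more detailed.

```python
# Hedged sketch of MinMax_Host_Utilization.  Thresholds, data structures and
# the uphill-only guard are assumptions, not the paper's exact rules.
def minmax_host_utilization(hosts, low=0.2, high=0.8):
    """hosts: dict host_id -> list of VM CPU shares (fractions of the host).
    Returns a list of (vm_share, src_host, dst_host) migrations."""
    migrations = []
    util = lambda h: sum(hosts[h])
    while True:
        under = [h for h in hosts if hosts[h] and util(h) < low]
        if not under:
            return migrations
        src = min(under, key=util)                   # least utilized source
        vm = max(hosts[src])                         # move its largest VM first
        targets = [h for h in hosts
                   if h != src and util(h) > util(src) and util(h) + vm <= high]
        if not targets:
            return migrations
        dst = max(targets, key=util)                 # most utilized feasible target
        hosts[src].remove(vm)
        hosts[dst].append(vm)
        migrations.append((vm, src, dst))

hosts = {"h1": [0.05, 0.10], "h2": [0.50, 0.20], "h3": [0.60]}
print(minmax_host_utilization(hosts))   # consolidates h1's VMs onto h2 and h3
```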

17.
With the rapid development of computer technology, multicore processors have been widely adopted. This paper describes in detail the implementation of the voltage regulator module for the multicore processors of a high-performance computer, including the detailed design and parameter calculation of the main circuit, the output filter, and the feedback compensation circuit. Application results show that the voltage regulator module fully meets the power supply requirements of the multicore processors.

18.
Dynamic reconfiguration technology has become a research hotspot owing to its high performance, low power consumption, and high flexibility. This paper introduces dynamically reconfigurable processor technology in terms of its basic concepts, background, and a classification of implementation approaches, and proposes a design scheme for a multicore dynamically reconfigurable processor. It also briefly reviews the current problems of the technology and outlines future research directions.

19.

Balancing energy-performance trade-offs for smartphone processor operation is undergoing intense research, given the challenges of the evolving technology of mobile computing. To guarantee energy-efficient processor operation, layout, and architecture, it is necessary to identify and integrate the optimization techniques and parameters that influence the energy-performance trade-off across the various processor activity domains. Existing literature on energy optimization in smartphones focuses primarily on individual sub-domains such as the OS, the GPU, and cloud offloading methods; the various smartphone processor activity domains are among the most discussed yet least integrated. Through this study, we present the current state-of-the-art energy optimization techniques for smartphone processor operations. We also classify the main energy-draining processor operations, together with a thorough discussion of methodologies and popular optimization techniques. The study models smartphone processor sub-components, highlighting conventional techniques and the performance parameters across its varied domains that affect the device's energy performance, with the aim of significantly reducing energy drain without serious performance degradation. We analyze these approaches in the context of applicability, performance, and expected future demands, reveal their limitations, and identify open research issues in the available literature. Finally, we conclude by summarizing the current state of the art on the power consumption of smartphone processor activities.


20.
王冶  张盛兵  王党辉 《计算机工程》2012,38(1):268-269,272
To reduce the energy consumption of the on-chip cache in a microprocessor, an instruction cache based on a prebuffering mechanism is designed. Through prediction by the prebuffer control unit, the instructions needed by the processor hit in the buffer as often as possible, avoiding the power consumed by accessing the instruction cache. Simulation results on seven benchmark programs show that the prebuffering mechanism saves 23.23% of processor power and improves program execution performance by 7.53% on average.
