Similar Articles
14 similar articles found.
1.
To reduce environmental impact, it is essential to make data centers green by turning servers off and tuning their speeds to the instantaneous offered load, that is, by determining the dynamic configuration of a web server cluster. We model the problem of selecting which servers will be on and at which speeds through mixed integer programming; we also show how to combine such solutions with control theory. As a proof of concept, we implemented this dynamic configuration scheme in a web server cluster running Linux, with soft real-time requirements and QoS control, in order to guarantee both energy efficiency and a good user experience. In this paper, we show the performance of our scheme compared to other schemes, a comparison of centralized and distributed approaches to QoS control, and a comparison of schemes for choosing server speeds.
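As a rough, self-contained illustration of the kind of on/off-plus-speed selection this abstract describes (not the authors' actual mixed integer program), the sketch below enumerates discrete speed assignments for a tiny three-server cluster and keeps the cheapest configuration whose aggregate capacity covers the offered load; the server list, power figures, and capacities are all invented.

```python
# Hypothetical brute-force stand-in for a mixed integer program: choose which
# servers are on and at which discrete speed, minimizing power while the summed
# capacity covers the instantaneous load. All numbers are invented.
from itertools import product

# (power_watts, capacity_req_per_s) per discrete speed of each server; speed 0 = off
SERVERS = [
    [(0, 0), (90, 400), (120, 700)],   # server A
    [(0, 0), (80, 350), (110, 600)],   # server B
    [(0, 0), (100, 500), (140, 900)],  # server C
]

def best_configuration(load):
    """Return (power, speeds) of the cheapest configuration serving `load` req/s."""
    best = None
    for speeds in product(*(range(len(s)) for s in SERVERS)):
        power = sum(SERVERS[i][k][0] for i, k in enumerate(speeds))
        capacity = sum(SERVERS[i][k][1] for i, k in enumerate(speeds))
        if capacity >= load and (best is None or power < best[0]):
            best = (power, speeds)
    return best

print(best_configuration(800))   # -> (140, (0, 0, 2)) with the toy numbers above
```

A real formulation would hand the same binary on/off and speed-selection variables to an MIP solver instead of enumerating, and would be re-solved (or combined with a feedback controller, as the abstract suggests) whenever the measured load changes.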

2.
Information and communication technology (ICT) has a profound impact on the environment because of its large CO2 emissions. In recent years, the research field of "green", low-power networking infrastructures has become of great importance to both service/network providers and equipment manufacturers. Cloud computing, an emerging technology, can increase the utilization and efficiency of hardware equipment. A cloud datacenter needs a job scheduler to arrange resources for executing jobs. In this paper, we propose a scheduling algorithm for cloud datacenters based on a dynamic voltage and frequency scaling (DVFS) technique. Our scheduling algorithm can efficiently increase resource utilization and hence decrease the energy consumed in executing jobs. Experimental results show that our scheme reduces energy consumption more than other schemes do, without sacrificing job execution performance. We thus provide a green, energy-efficient scheduling algorithm using the DVFS technique for Cloud computing datacenters.
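To make the DVFS idea concrete, here is a minimal, hypothetical dispatcher rule in the same spirit: run each job at the lowest frequency level that still meets its deadline, since dynamic power grows roughly cubically with frequency. The frequency levels, the power model, and the job parameters are assumptions for the example, not values from the paper.

```python
# Hypothetical DVFS-aware dispatch rule (not the paper's algorithm): pick the
# lowest frequency level whose execution time still fits the job's deadline.
FREQ_LEVELS = [1.0, 1.6, 2.2, 2.8]                    # GHz, assumed
POWER = {f: 10 + 8 * f ** 3 for f in FREQ_LEVELS}     # watts, toy cubic model

def pick_frequency(cycles_giga, deadline_s):
    """Lowest frequency whose execution time fits the deadline, else the fastest."""
    for f in FREQ_LEVELS:                             # ascending order
        if cycles_giga / f <= deadline_s:
            return f
    return FREQ_LEVELS[-1]

def energy(cycles_giga, f):
    return POWER[f] * (cycles_giga / f)               # joules = watts * seconds

job = (30.0, 20.0)                                    # 30 Gcycles, 20 s deadline
f = pick_frequency(*job)
print(f, energy(job[0], f))                           # 1.6 GHz beats running flat-out at 2.8
```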

3.
Developing energy-efficient clusters can not only reduce electricity costs but also improve system reliability. Existing scheduling strategies developed for energy-efficient clusters conserve energy at the cost of performance, and the performance problem becomes especially apparent when cluster computing systems are heavily loaded. To address this issue, we propose in this paper a novel scheduling strategy, adaptive energy-efficient scheduling (AEES), for aperiodic and independent real-time tasks on heterogeneous clusters with dynamic voltage scaling. The AEES scheme adaptively adjusts voltages according to the workload conditions of a cluster, thereby making the best trade-off between energy conservation and schedulability. When the cluster is heavily loaded, AEES considers the voltage levels of both new tasks and running tasks to meet task deadlines. Under light load, AEES aggressively reduces voltage levels to conserve energy while maintaining high guarantee ratios. We conducted extensive experiments to compare AEES with an existing algorithm, MEG, as well as two baseline algorithms, MELV and MEHV. Experimental results show that AEES significantly improves the scheduling quality of MELV, MEHV, and MEG.
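The adapt-to-load principle can be pictured with a small, generic sketch (this is not the AEES algorithm itself): for the current task set, pick the lowest voltage/frequency level that keeps utilization schedulable, so light load drives the voltage down and heavy load drives it up. The level table, the utilization bound, and the task parameters are assumptions.

```python
# Illustrative load-adaptive level selection (not AEES itself): choose the lowest
# voltage level whose relative speed keeps the task set's utilization feasible.
LEVELS = [(0.8, 0.4), (1.0, 0.7), (1.2, 1.0)]    # (voltage, relative speed), assumed

def utilization(tasks, speed):
    """Utilization of the task set when the processor runs at a given relative speed."""
    return sum(wcet / (period * speed) for wcet, period in tasks)

def choose_level(tasks):
    """Lowest-voltage level keeping utilization under 1.0; None if even the top fails."""
    for volt, speed in LEVELS:                   # ascending voltage
        if utilization(tasks, speed) <= 1.0:
            return volt, speed
    return None                                  # overloaded: reject or defer new work

tasks = [(2.0, 10.0), (1.0, 8.0), (3.0, 20.0)]   # (WCET at full speed, period) in ms
print(choose_level(tasks))                        # light load -> a reduced level suffices
```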

4.
Although high-performance computing traditionally focuses on the efficient execution of large-scale applications, both energy and power have become critical concerns on the approach to exascale. Drastic increases in the power consumption of supercomputers significantly affect their operating costs and failure rates. In modern microprocessor architectures, equipped with dynamic voltage and frequency scaling (DVFS) and CPU clock modulation (throttling), power consumption may be controlled in software. Additionally, the network interconnect, such as InfiniBand, may be exploited to maximize energy savings, while the application performance loss and frequency-switching overheads must be carefully balanced. This paper advocates a runtime assessment of such overheads by characterizing point-to-point communications into phases and then analyzing the time gaps between the communication calls. Certain communication and architectural parameters are taken into consideration in the three proposed frequency scaling strategies, which differ in their treatment of the time gaps. Experimental results are presented for the NAS parallel benchmarks as well as for realistic parallel electronic structure calculations performed by the widely used quantum chemistry package GAMESS. For the latter, three different process-to-core mappings were studied with respect to their energy savings under the proposed frequency scaling strategies and under existing state-of-the-art techniques. Close to the maximum energy savings were obtained with a low performance loss of 2% on the given platform.
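The gap-driven decision can be sketched as follows; this is an illustrative rule in the spirit of the proposed strategies, not the paper's exact policy. set_frequency() is a placeholder for a real DVFS interface (e.g., cpufreq), and the switching overhead, the threshold factor, and the P-states are assumptions.

```python
# Sketch of a gap-based frequency decision: run fast only when the compute gap
# between two communication calls is long enough to amortize the switch overhead.
SWITCH_OVERHEAD_S = 0.002            # assumed cost of one frequency transition
F_LOW, F_HIGH = 1.2e9, 2.4e9         # Hz, assumed available P-states

def set_frequency(hz):
    print(f"[dvfs] would switch to {hz / 1e9:.1f} GHz")   # placeholder, no real driver

def frequency_for_gap(gap_s, threshold_s=10 * SWITCH_OVERHEAD_S):
    """High frequency only when the gap amortizes the switches; otherwise stay low."""
    return F_HIGH if gap_s > threshold_s else F_LOW

# Gaps (seconds of computation) observed between consecutive point-to-point calls.
for gap in (0.001, 0.005, 0.080, 0.300):
    set_frequency(frequency_for_gap(gap))
```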

5.
As semiconductor manufacturing technology continues to improve, it is possible to integrate more and more transistors onto a single processor. Many-core processor design has resulted in part from the search for ways to utilize this enormous transistor real estate. The Single-Chip Cloud Computer (SCC) is an experimental many-core processor created by Intel Labs. In this paper we present a study in which we analyze this innovative many-core system by running several workloads with distinctive parallelism characteristics. We investigate the effect on system performance by monitoring specific hardware performance counters. We then experiment with varying hardware configuration parameters such as the number of cores, clock frequency, and voltage levels, execute the chosen workloads, and collect timing, power consumption, and energy consumption information on this many-core research platform. This lets us comprehensively analyze the behavior and scalability of the Intel SCC system with the introduced workloads in terms of performance and energy consumption. Our results show that the profiled parallel workload execution has a communication bottleneck on the Intel SCC system. Moreover, our results indicate that the number of cores used to execute different workloads should be chosen carefully in order to strike a balance between execution performance and energy efficiency for different applications.
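The "choose the number of cores carefully" conclusion can be illustrated with a toy selection step: given measured execution time and average power for each core count (the numbers below are invented, not SCC measurements), pick the configuration with the lowest energy-delay product.

```python
# Toy core-count selection by energy-delay product; all measurements are invented.
measurements = {            # cores: (time_s, avg_power_w), hypothetical values
    8:  (120.0, 35.0),
    16: (70.0, 55.0),
    32: (50.0, 90.0),
    48: (45.0, 125.0),
}

def energy_delay_product(time_s, power_w):
    return (power_w * time_s) * time_s        # energy (J) x delay (s)

best_cores = min(measurements, key=lambda c: energy_delay_product(*measurements[c]))
print("best core count:", best_cores)         # 32 with the numbers above
```

With these toy numbers the 32-core configuration wins even though 48 cores is faster, because the extra power outweighs the time saved.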

6.
This work presents a scheduling algorithm to reduce the energy consumption of hard real-time tasks with fixed priorities assigned by a rate-monotonic policy. Sets of independent tasks running periodically on a processor with dynamic voltage scaling (DVS) are considered. The proposed online approach can cooperate with many slack-time analysis methods based on low-power work demand analysis (lpWDA) without increasing the computational complexity of the DVS algorithms. The approach introduces a novel technique called low-power fluid slack analysis (lpFSA), which extends the analysis interval produced by its cooperative methods and computes the slack available in the extended interval. lpFSA regards the additional slack as fluid and computes its length so that it can be moved to the current job; the proposed approach therefore provides the cooperative methods with additional slack. Experimental results show that the proposed approach combined with lpWDA-based algorithms achieves greater energy reductions than the original algorithms alone.
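For context, the generic slack-to-speed step used by many lpWDA-style DVS schemes is shown below (a simplified stand-in, not the lpFSA analysis itself): once some analysis reports S units of slack for the current job, the job is stretched over WCET + S and the speed is rounded up to the nearest supported level. The speed table and task numbers are assumptions.

```python
# Generic slack-to-speed step (simplified stand-in, not lpFSA): stretch the
# current job's worst-case execution time over the available slack.
SPEEDS = [0.4, 0.6, 0.8, 1.0]          # normalized frequencies, assumed

def speed_for_job(wcet, slack):
    """Lowest supported speed finishing the job within wcet + slack."""
    ideal = wcet / (wcet + slack)       # continuous speed that just fits
    for s in SPEEDS:
        if s >= ideal:
            return s
    return 1.0

print(speed_for_job(wcet=4.0, slack=3.0))   # ideal 0.571 -> runs at 0.6
```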

7.
王桂彬, 计算机学报 (Chinese Journal of Computers), 2012, 35(5): 979-989
As a representative many-core architecture, GPU (Graphics Processing Units) chips integrate a large number of parallel processing cores, and their power consumption has grown accordingly, gradually becoming one of the largest power consumers in a computer system; software low-power optimization is an effective way to reduce chip power. This paper proposes a model-guided, multi-dimensional low-power optimization technique that combines dynamic voltage/frequency scaling with dynamic core shutdown to reduce GPU power consumption without affecting performance. First, targeting the characteristics of the GPU multi-threaded execution model, a power-optimization model for memory-bound programs is established. Then, based on this model, the effects of dynamic voltage/frequency scaling and dynamic core shutdown on program execution time and energy consumption are analyzed, and the power-optimization problem is formulated as a general integer programming problem. Finally, an evaluation on 9 typical GPU programs and a comparison with existing methods verify that the proposed low-power optimization technique effectively reduces chip power consumption without affecting performance.
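A toy stand-in for the integer-programming view described above: enumerate (frequency level, number of active cores) pairs and keep the lowest-energy point whose predicted runtime does not exceed the full-configuration baseline. The memory-bound performance model and every constant below are invented for illustration; they are not the paper's model or measurements.

```python
# Toy stand-in for a combined DVFS + core-shutdown search on a memory-bound kernel.
FREQS = [0.6, 0.8, 1.0]            # normalized core frequency levels, assumed
MAX_CORES = 16
MEM_TIME = 10.0                    # time spent waiting on memory (frequency-insensitive)
COMPUTE_CYCLES = 64.0              # normalized compute work, split across cores

def runtime(f, cores):
    # Memory-bound: limited by whichever is longer, memory or compute time, so
    # lowering frequency or shutting cores is "free" until compute dominates.
    return max(MEM_TIME, COMPUTE_CYCLES / (f * cores))

def power(f, cores):
    return 30.0 + cores * 5.0 * f ** 2        # static part + per-active-core dynamic part

baseline_time = runtime(1.0, MAX_CORES)
candidates = [(f, c) for f in FREQS for c in range(1, MAX_CORES + 1)
              if runtime(f, c) <= baseline_time]        # "no performance loss"
best = min(candidates, key=lambda fc: power(*fc) * runtime(*fc))
print(best, power(*best) * runtime(*best))
```

Because the kernel is memory-bound in this toy model, running 11 cores at the 0.6 level matches the baseline runtime while using far less power, which is exactly the kind of operating point a combined DVFS/core-shutdown search looks for.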

8.
易会战, 罗兆成, 软件学报 (Journal of Software), 2013, 24(8): 1761-1774
Many organizations currently use high-performance computers to run operational numerical computations on a periodic basis. The main cost of maintaining these operational systems is the large amount of electricity they consume every day, so reducing energy consumption can greatly lower the cost of maintaining them. The core of a high-performance operational system is the microprocessor, and modern microprocessors generally support dynamic voltage scaling, a technique that reduces microprocessor energy consumption by lowering voltage and frequency, usually at the cost of some performance. This paper proposes an energy-optimization technique for high-performance operational applications. It exploits the multiple frequency levels supported by the system to build an energy-optimization model under a performance constraint and optimize the energy consumption of the operational application. Depending on how the program information is obtained, two energy-optimization models are proposed, SEOM and CEOM: the program information for SEOM can be obtained by direct measurement, whereas CEOM obtains program information through compiler instrumentation. The energy-optimization effect was validated on a typical platform, saving up to 12% of energy.
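The performance-constrained energy model can be pictured with a small brute-force stand-in (this is not the SEOM/CEOM formulation itself): assign one frequency level to each program phase so that total energy is minimized while total runtime stays within an allowed slowdown. The phase data, the CPU-boundedness fractions, the power model, and the 5% bound are all invented.

```python
# Brute-force stand-in for a performance-constrained per-phase frequency assignment.
from itertools import product

FREQS = [1.2, 1.8, 2.4]                          # GHz, assumed levels
PHASES = [(10.0, 0.9), (8.0, 0.2), (6.0, 0.6)]   # (time at 2.4 GHz, cpu-bound fraction)

def phase_time(t_base, cpu_frac, f):
    # CPU-bound part scales with frequency; the memory-bound part does not.
    return t_base * (cpu_frac * 2.4 / f + (1.0 - cpu_frac))

def phase_energy(t_base, cpu_frac, f):
    return (5.0 + 4.0 * f ** 2) * phase_time(t_base, cpu_frac, f)   # toy power model

budget = 1.05 * sum(t for t, _ in PHASES)        # allow a 5% slowdown
best = min(
    (combo for combo in product(FREQS, repeat=len(PHASES))
     if sum(phase_time(*p, f) for p, f in zip(PHASES, combo)) <= budget),
    key=lambda combo: sum(phase_energy(*p, f) for p, f in zip(PHASES, combo)))
print(best)    # the memory-bound phase is slowed down, the CPU-bound one is not
```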

9.
For heterogeneous parallel systems composed of general-purpose microprocessors and dedicated accelerators, this paper proposes an energy-optimization method that combines communication-aware parallel task partitioning with dynamic voltage and frequency scaling. The method partitions a parallel task graph and maps it onto the heterogeneous processing units so as to minimize system energy consumption while satisfying a performance constraint. In typical heterogeneous parallel systems, the host processor and the accelerators are mostly connected through the system bus, which inevitably introduces non-negligible communication overhead, so communication-aware task partitioning is the key to this problem. A static optimal energy-optimization method based on integer linear programming and a dynamic energy-optimization method based on a genetic algorithm are proposed, and the effectiveness of the methods is verified on a typical scientific-computing application.
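A brute-force sketch of the static formulation is shown below (the paper uses integer linear programming and a genetic algorithm; here a tiny chain-shaped task graph is simply enumerated): each task goes either to the CPU or to the accelerator, every cross-device edge is charged a bus-transfer cost, and the lowest-energy assignment that meets the deadline wins. All times, powers, transfer costs, and the deadline are invented.

```python
# Brute-force stand-in for communication-aware CPU/accelerator partitioning.
from itertools import product

# task: (cpu_time, cpu_power, acc_time, acc_power), all invented
TASKS = [(4.0, 20.0, 1.0, 60.0),
         (3.0, 20.0, 1.5, 60.0),
         (5.0, 20.0, 1.2, 60.0)]
EDGES = [(0, 1), (1, 2)]            # a simple chain of dependencies
BUS_TIME, BUS_ENERGY = 0.8, 10.0    # cost per cross-device edge
DEADLINE = 8.0

def evaluate(assign):               # assign[i] = 0 (CPU) or 1 (accelerator)
    time = sum(TASKS[i][0] if d == 0 else TASKS[i][2] for i, d in enumerate(assign))
    energy = sum(TASKS[i][0] * TASKS[i][1] if d == 0 else TASKS[i][2] * TASKS[i][3]
                 for i, d in enumerate(assign))
    crossings = sum(1 for a, b in EDGES if assign[a] != assign[b])
    return time + crossings * BUS_TIME, energy + crossings * BUS_ENERGY

feasible = [(evaluate(a)[1], a) for a in product((0, 1), repeat=len(TASKS))
            if evaluate(a)[0] <= DEADLINE]
print(min(feasible))                # lowest-energy mapping meeting the deadline
```

With these toy numbers the best mapping is a mixed one rather than "everything on the accelerator", which is why communication-aware partitioning matters.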

10.
We explore novel algorithms for DVS (dynamic voltage scaling) based energy minimization of DAG (directed acyclic graph) based applications on parallel and distributed machines in dynamic environments. Static DVS algorithms for DAG execution use estimated execution times, which in practice are overestimated or underestimated, so many tasks may complete earlier or later than expected during the actual execution. With overestimation, the extra available slack can be given to future tasks so that energy requirements can be reduced. With underestimation, the increased time may cause the application to miss its deadline; slack can be taken from future tasks to reduce the possibility of missing the deadline. In this paper, we present novel dynamic scheduling algorithms for reallocating slack to future tasks to reduce energy and/or satisfy deadline constraints. Experimental results show that our algorithms are comparable to static algorithms applied at runtime in terms of energy minimization and deadline satisfaction, but require considerably smaller computational overhead.
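The central runtime step, redistributing the slack left (or consumed) by a just-finished task over the remaining tasks and rescaling their speeds, can be sketched as follows; the proportional redistribution rule is an assumption for illustration, not necessarily the authors' policy.

```python
# Sketch of runtime slack reallocation: remaining tasks absorb (or give back) the
# slack from the task that just finished, in proportion to their estimated time.
def reallocate(remaining, estimated_finish, actual_finish):
    """remaining: list of (name, estimated_time); returns (name, speed) per task."""
    slack = estimated_finish - actual_finish           # >0 finished early, <0 late
    total = sum(t for _, t in remaining)
    plan = []
    for name, t in remaining:
        share = slack * t / total                      # proportional share of slack
        speed = min(1.0, t / (t + share))              # slow down when slack is gained
        plan.append((name, round(speed, 3)))
    return plan

# Task A was estimated to finish at t=10 but finished at t=7: 3 units of slack.
print(reallocate([("B", 4.0), ("C", 6.0)], estimated_finish=10.0, actual_finish=7.0))
```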

11.
PeiZong Lee, Parallel Computing, 1995, 21(12): 1895-1923
It is widely accepted that distributed memory parallel computers will play an important role in solving computation-intensive problems. However, designing an algorithm for a distributed memory system is time-consuming and error-prone, because the programmer is forced to manage both parallelism and communication. In this paper, we present techniques for compiling programs for distributed memory parallel computers. We study the storage management of data arrays and the arrangement of execution schedules for Do-loop programs on distributed memory parallel computers. First, we introduce formulas for representing the distribution of specific data arrays across processors. Then, we define communication costs for some message-passing communication operations. Next, we derive a dynamic programming algorithm for data distribution. After that, we show how to improve communication time by pipelining data, and illustrate how to use data-dependence information for pipelining data. Jacobi's iterative algorithm and the Gaussian elimination algorithm for linear systems are used to illustrate our method. We also present experimental results on a 32-node nCUBE-2 computer.
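One concrete shape such data-distribution formulas often take is the block-cyclic owner mapping; a minimal 1-D version is shown below for orientation (the functions are illustrative and are not the paper's notation).

```python
# Minimal block-cyclic distribution of a 1-D array across processors:
# element i lives on processor owner(i), at local position local_index(i).
def owner(i, block_size, num_procs):
    """Processor that owns element i under a block-cyclic distribution."""
    return (i // block_size) % num_procs

def local_index(i, block_size, num_procs):
    """Position of element i inside its owner's local storage."""
    return (i // (block_size * num_procs)) * block_size + i % block_size

# 16 elements, block size 2, 4 processors: blocks rotate P0, P1, P2, P3, P0, ...
print([owner(i, 2, 4) for i in range(16)])
print([local_index(i, 2, 4) for i in range(16)])
```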

12.
Analysis of the Maximum Steady-State Throughput of Multicore Processors under Temperature Constraints
As the power density of multicore processors keeps increasing, performance analysis under temperature constraints has become an important part of early design optimization for multicore processors. Processor temperature varies greatly depending on which tasks are running, but existing work does not consider the effect of these task differences on processor performance. For multicore processors that use dynamic voltage and frequency scaling as their thermal-management technique, this paper proposes a new method for analyzing the maximum steady-state throughput under temperature constraints that takes the differences between tasks into account, in order to improve analysis accuracy. Task characteristics are introduced into the performance-analysis model, the relationship among the task characteristics on the individual cores when the throughput of the multicore processor reaches its maximum is established, and the maximum steady-state throughput analysis is reduced to a linear programming problem. Simulation results show that the proposed method is accurate and that task characteristics have a very large effect on the maximum throughput of multicore processors.
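The reduction to a linear program can be pictured with a toy two-core instance: maximize total throughput subject to steady-state temperatures, which are linear in core power through a thermal-resistance matrix, staying below the limit. The thermal matrix, the linear power-versus-throughput model, and all constants are invented, and SciPy's LP solver merely stands in for whatever solver is actually used.

```python
# Toy LP: maximize total throughput of 2 cores with steady-state temperatures <= 80 C.
import numpy as np
from scipy.optimize import linprog

R = np.array([[1.2, 0.4],          # C/W thermal resistances (incl. core coupling)
              [0.4, 1.2]])
a = np.array([8.0, 10.0])          # W per unit of throughput (task-dependent)
b = np.array([5.0, 5.0])           # idle power per core, W
T_AMB, T_MAX = 45.0, 80.0

# Steady state: T = T_AMB + R @ (a*x + b) <= T_MAX  =>  (R * a) @ x <= rhs
A_ub = R * a                       # A_ub[i, j] = R[i, j] * a[j]
b_ub = (T_MAX - T_AMB) - R @ b
res = linprog(c=-np.ones(2), A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print(res.x, -res.fun)             # per-core throughputs and the maximum total
```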

13.
A novel pixel memory using an integrated voltage-loss-compensation (VLC) circuit is proposed for ultra-low-power TFT-LCDs; it can increase the number of gray-scale levels of a single subpixel by using an analog-voltage gray-scale technique. The new pixel with a VLC circuit is integrated under a small reflective electrode in a 3.17-in. HVGA transflective panel with a high transmissive aperture ratio (39%), using a standard low-temperature polysilicon process based on 1.5-μm design rules; no additional process steps are required. The VLC circuit in each pixel enables simultaneous refresh with a very small change in voltage, resulting in a two-orders-of-magnitude reduction in circuit power for a 64-color image display. The advanced transflective TFT-LCD using the newly proposed pixel can display high-quality multi-color images anytime and anywhere, owing to its low power consumption and good outdoor readability.

14.
Compared with symmetric multicore processors, asymmetric multicore processors offer higher energy efficiency and will become the mainstream architecture of future parallel operating systems. For the problem of scheduling parallel tasks on asymmetric multicore processors, existing work assumes that all core frequencies are fixed, lacks theoretical analysis, and does not consider the efficiency or generality of the algorithms. To address this problem, this paper first builds a nonlinear programming model and, from its analysis, derives scheduling principles that take into account the synchronization behavior of parallel tasks, core asymmetry, and core load. Based on these principles, an integrated scheduling algorithm is proposed that improves energy efficiency by integrating thread scheduling with dynamic voltage and frequency scaling, and that achieves generality through a parameter-tuning mechanism. The proposed algorithm is the first scheduling algorithm for asymmetric multicore processors that combines thread scheduling with dynamic voltage and frequency scaling. Experiments on a real platform show that the algorithm is applicable to a variety of environments and is 24% to 50% more energy-efficient than other comparable algorithms.
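The flavor of the scheduling principle can be sketched in two steps (an illustration, not the paper's algorithm): give the big cores to the threads that gain the most from them, then run the big cores only as fast as needed so that threads meeting at a barrier stay balanced. The speedup factors, frequency tables, and workloads are assumptions.

```python
# Illustrative integrated placement + DVFS step for an asymmetric (big/little) CPU.
BIG_FREQS = [1.0, 1.5, 2.0]                 # GHz available on big cores, assumed
LITTLE_FREQ = 1.0

def place_threads(threads, big_slots):
    """threads: list of (name, big_speedup). Give big cores to the threads that
    benefit from them the most; everyone else runs on little cores."""
    ranked = sorted(threads, key=lambda t: t[1], reverse=True)
    return {name: ("big" if i < big_slots else "little")
            for i, (name, _) in enumerate(ranked)}

def big_frequency(work_little, work_big, big_speedup):
    """Lowest big-core frequency such that the big-core thread does not finish
    after the little-core thread (they synchronize at a barrier)."""
    t_little = work_little / LITTLE_FREQ
    for f in BIG_FREQS:
        if work_big / (big_speedup * f) <= t_little:
            return f
    return BIG_FREQS[-1]

threads = [("t0", 1.8), ("t1", 1.2), ("t2", 1.7), ("t3", 1.1)]
mapping = place_threads(threads, big_slots=2)
print(mapping, big_frequency(work_little=10.0, work_big=20.0, big_speedup=1.8))
```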
