Similar Literature
19 similar documents retrieved.
1.
The growing power consumption of data centers, and of their cooling systems in particular, has attracted increasing attention; reducing system power consumption also reduces a data center's carbon emissions. Traditional workload-consolidation methods lower computing power while preserving as much of the data center's computing capacity as possible, but they usually ignore cooling power. Improper consolidation can raise the data center's peak temperature and increase cooling power consumption. This paper first derives necessary conditions for workload consolidation from a data center temperature-power model, and then proposes an offline genetic algorithm. Finally, an Internet data center is simulated with access traces from a real online bookstore, and the impact of the genetic-algorithm-based consolidation method on system temperature, performance, and power consumption is analyzed quantitatively.

2.
To meet the demand for high-I/O database cluster applications, a database cluster system based on middleware technology is designed, and a Markov-model-based load-balancing algorithm is proposed for it. Within each sampling period of the execution nodes' load information, the algorithm uses a Markov model to predict the load state of every execution node in the cluster, and balances the load across the cluster according to the predicted states. Experimental results show that the algorithm effectively improves the performance of the database cluster.
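A minimal sketch of how a Markov transition matrix over discretized load states could drive node selection, in the spirit of the algorithm described above (the three-state discretization, the transition matrix, and the node names are illustrative assumptions, not the paper's design):

```python
import numpy as np

# Illustrative 3-state load model: 0 = light, 1 = medium, 2 = heavy.
# P[i, j] is the estimated probability of moving from state i to state j,
# learned from samples collected during the load-information sampling period.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.1, 0.3, 0.6]])

def predict_next_state(current_state: int) -> int:
    """Predict the most likely load state one sampling period ahead."""
    return int(np.argmax(P[current_state]))

def pick_node(current_states: dict[str, int]) -> str:
    """Route the next request to the node with the lowest predicted load."""
    return min(current_states, key=lambda n: predict_next_state(current_states[n]))

print(pick_node({"node-1": 2, "node-2": 1, "node-3": 0}))  # -> node-3
```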

3.
Dynamic virtual machine consolidation in cloud data centers requires tracking the operating state of the servers, which is in turn affected by changes in the data center's load; most existing CPU-utilization prediction methods consider only the utilization of the current server. This paper proposes a Kalman-filter-based CPU-utilization prediction model, builds a data center load-variation model based on the coefficient of variation of CPU utilization across all servers, describes the Kalman-filter prediction method in detail, and discusses energy and performance metrics for cloud data centers. Finally, to validate the effectiveness of the Kalman-filter-based prediction algorithm, experiments were run on the CloudSim simulator with five PlanetLab datasets. The results show that the Kalman filter tracks the trend of server CPU utilization well, effectively reduces data center energy consumption, and maintains good computing performance.
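This abstract describes one-step-ahead CPU-utilization prediction with a Kalman filter; the sketch below shows a minimal scalar Kalman filter over a utilization series, assuming a random-walk state model (the noise variances q and r and the sample series are illustrative, not values from the paper):

```python
def kalman_predict(cpu_series, q=1e-4, r=1e-2):
    """One-step-ahead prediction of CPU utilization with a scalar Kalman filter.

    Assumes a random-walk state model: x_k = x_{k-1} + w, z_k = x_k + v,
    with process noise variance q and measurement noise variance r.
    """
    x, p = cpu_series[0], 1.0          # initial state estimate and covariance
    predictions = []
    for z in cpu_series[1:]:
        # Predict step
        x_prior, p_prior = x, p + q
        predictions.append(x_prior)
        # Update step with the new measurement z
        k = p_prior / (p_prior + r)    # Kalman gain
        x = x_prior + k * (z - x_prior)
        p = (1 - k) * p_prior
    return predictions

print(kalman_predict([0.30, 0.35, 0.40, 0.38, 0.45]))
```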

4.
An Energy-Consumption Optimization Algorithm for MapReduce Virtual Clusters   (Cited: 1; self: 0, others: 1)
With the emergence of the global energy crisis, many researchers have turned their attention to the energy consumption of data centers. Reducing the number of active nodes, while still meeting user requirements, is an effective way to lower a data center's energy consumption. The traditional way to reduce active nodes is virtual machine migration, but migration incurs a large system overhead. This paper proposes an energy optimization algorithm for MapReduce virtual clusters, the Online Time Balancing Algorithm (OTBA), which reduces the number of active physical nodes and thus data center energy consumption while avoiding virtual machine migration. By building an energy model of the cloud data center, a queueing model of user-submitted services, and a job-execution model that evaluates job completion quality, the objective function and decision variables of the energy-saving model are determined. OTBA is an energy-aware scheduling algorithm for online MapReduce jobs in virtualized cloud environments; it trades off virtual machine lifetime against resource utilization so that the number of activated servers, and hence the energy consumption, is minimized. The results are validated both by simulation and by experiments on a Hadoop platform.

5.
A Routing Algorithm for the IoT Sensing Layer Based on a Context-Predictive Ant Colony Model   (Cited: 1; self: 0, others: 1)
With the rapid development of key Internet of Things (IoT) technologies, a wide range of IoT applications has emerged; smart buildings are a representative application that exhibits the full characteristics of the IoT. Targeting the smart-building scenario, this paper introduces context awareness into the traditional ant colony model and proposes CACRA, a routing algorithm with a prediction mechanism for the IoT sensing layer. By taking into account context information such as the distance, access frequency, and hop count of different transmission links, the algorithm lets each node choose the optimal next hop, and it automatically tunes its parameters according to the amount of data a node receives, thereby balancing energy consumption across nodes. Simulation experiments in a WSN, compared against classical routing protocols, show that the algorithm is energy efficient, balances load, and prolongs network lifetime.

6.
Dynamic power and thermal management (DPTM) can effectively reduce the runtime energy consumption and peak temperature of a system-on-chip through accurate task prediction and sensible scheduling. To obtain better DPTM scheduling, this paper proposes an accurate combined task-prediction algorithm and a task-scheduling algorithm, VP-TALK, and builds a complete DPTM prototype system on top of them. For accurate prediction of complex tasks, the system first decomposes a task by spectral length into random, periodic, and trend components, and then analyzes these components with a grey model, a Fourier model, and a radial basis function (RBF) neural network model respectively, combining them to obtain an accurate prediction. Based on the precisely predicted workload, VP-TALK computes the ideal voltage-frequency pair, selects the two real voltage-frequency pairs adjacent to the ideal value as two feasible operating states, and, subject to core-temperature and real-time constraints, distributes the workload between these two states to achieve the best DPTM result. Finally, a complete DPTM prototype is built from the four source algorithms using machine learning. Experimental results show that: (1) the combined task-prediction method has an average error of only 2.89%; (2) under the same peak-temperature constraint, and even though a more sensitive power-temperature relationship is assumed, VP-TALK still achieves an average 14.33% energy reduction over existing scheduling algorithms at high workload rates; (3) the DPTM prototype achieves energy optimization close to the ideal case.
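The abstract describes how VP-TALK realizes the ideal voltage-frequency pair by splitting the predicted workload across the two neighbouring real voltage-frequency pairs. A minimal sketch of that time-splitting step follows, assuming an illustrative frequency table and ignoring the temperature and real-time checks of the actual algorithm:

```python
# Available discrete frequencies (GHz), assumed sorted ascending.
FREQS = [0.8, 1.2, 1.6, 2.0]

def split_between_levels(predicted_cycles: float, period: float):
    """Return (f_low, f_high, t_high) so that running t_high seconds at f_high
    and (period - t_high) seconds at f_low completes predicted_cycles exactly.
    predicted_cycles is expressed in units of 1e9 cycles to match GHz."""
    f_ideal = predicted_cycles / period
    f_low = max((f for f in FREQS if f <= f_ideal), default=FREQS[0])
    f_high = min((f for f in FREQS if f >= f_ideal), default=FREQS[-1])
    if f_low == f_high:
        return f_low, f_high, period
    # cycles = f_high * t_high + f_low * (period - t_high)
    t_high = (predicted_cycles - f_low * period) / (f_high - f_low)
    return f_low, f_high, t_high

print(split_between_levels(1.4, 1.0))  # ideal 1.4 GHz -> 0.5 s at 1.6 GHz, 0.5 s at 1.2 GHz
```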

7.
李俊祺, 林伟伟, 石方, 李克勤. Journal of Software (软件学报), 2022, 33(11): 3944-3966
Virtual machine (VM) consolidation in data centers is a current research hotspot in cloud computing. Minimizing server energy consumption in cloud data centers while guaranteeing quality of service (QoS) is essentially an NP-hard multi-objective optimization problem. To address it, this work proposes HSI-VMC, a hybrid swarm-intelligence energy-aware VM consolidation method for heterogeneous server clouds that combines differential evolution and particle swarm optimization. The method comprises a peak-efficiency-ratio-based static-threshold overloaded-server detection strategy (PEBST), a migration-benefit-ratio-based strategy for selecting VMs to migrate (MRB), a target-server selection strategy, a hybrid discretized heuristic differential-evolution particle-swarm-optimization VM placement algorithm (HDH-DEPSO), and an average-load-based underloaded-server handling strategy (AVG). The combination of PEBST, MRB, and AVG detects overloaded and underloaded servers from the servers' peak efficiency ratios and mean CPU loads and selects suitable VMs for migration, reducing both the service-level-agreement violation rate (SLAV) caused by load fluctuation and the number of VM migrations. HDH-DEPSO combines the strengths of DE and PSO to search for better VM placements, keeping servers running as close as possible to their peak efficiency ratio and lowering their energy cost. A series of experiments on real cloud datasets (PlanetLab/Mix/Gan) shows that, compared with several mainstream energy-aware VM consolidation methods, HSI-VMC better balances multiple QoS metrics while effectively reducing server energy consumption in cloud data centers.

8.
The biggest limitation of wireless sensor networks is their limited energy. To use network energy efficiently and balance network load, a clustering algorithm based on residual energy and energy-consumption rate is proposed. A node's energy-consumption rate is a parameter that carries predictive information about future energy use; exploiting it makes the selection of cluster heads and the sizing of clusters more effective. The algorithm selects cluster heads according to these two parameters, which effectively prolongs node lifetime; in addition, cluster size is constrained and optimized according to a cluster head's distance to the base station, its current energy, and its consumption rate, further ensuring load balance among clusters. Simulations show that the improved algorithm effectively prolongs network lifetime.
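A minimal sketch of a weighted cluster-head score combining residual energy and energy-consumption rate, as the abstract describes qualitatively (the weighting formula, the weights, and the node data are illustrative assumptions, not the paper's exact criterion):

```python
def head_score(residual_energy: float, consumption_rate: float,
               w_energy: float = 0.7, w_rate: float = 0.3) -> float:
    """Higher residual energy and a lower consumption rate make a better cluster head."""
    return w_energy * residual_energy - w_rate * consumption_rate

nodes = {
    "n1": (0.9, 0.05),   # (residual energy, energy consumed per round)
    "n2": (0.6, 0.02),
    "n3": (0.8, 0.10),
}
head = max(nodes, key=lambda n: head_score(*nodes[n]))
print(head)  # n1 under these illustrative numbers
```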

9.
To address the excessive energy consumption of traction-motor cooling-fan drive systems, a variable-frequency energy-saving control system that regulates the cooling fan according to the traction motor's temperature is studied. Taking motor temperature as the controlled variable, the system uses fuzzy control to adjust the cooling-fan speed in real time, achieving energy-saving operation. A simulation model of the control system is built in Simulink to compute the fan frequency as the motor temperature changes and to analyze the energy-saving effect of variable-frequency control. Simulation results show that the method reduces energy consumption with a good energy-saving effect, and that it offers strong disturbance rejection and smooth transient behaviour during traction-motor temperature control.

10.
To address energy optimization of data center servers and the problem of choosing the right moment to migrate virtual machines, a VM migration algorithm based on dynamically adjusted thresholds (DAT) is proposed. The algorithm first adjusts the migration thresholds dynamically through statistical analysis of each physical machine's historical load data, and then decides when to migrate by delaying the trigger and predicting the physical machine's load trend. The algorithm was validated on a data center testbed built in our laboratory; the results show that, compared with the static-threshold method, the DAT-based migration algorithm shuts down more physical machines and yields lower cloud data center energy consumption. The DAT-based algorithm migrates virtual machines dynamically in response to physical-machine load changes, improving physical resource utilization, reducing data center energy consumption, and improving VM migration efficiency.
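A minimal sketch of a dynamically adjusted overload threshold computed from a host's historical load, plus a delayed trigger, in the spirit of the DAT idea above (the mean-plus-k-standard-deviations rule, the delay window, and the sample loads are illustrative assumptions):

```python
import statistics

def dynamic_threshold(history: list[float], k: float = 1.5, cap: float = 0.9) -> float:
    """Adjust the overload threshold from recent load samples."""
    mean = statistics.fmean(history)
    std = statistics.pstdev(history)
    return min(cap, mean + k * std)

def should_migrate(history: list[float], delay: int = 3) -> bool:
    """Trigger migration only after `delay` consecutive samples exceed the threshold."""
    thr = dynamic_threshold(history[:-delay] or history)
    return all(x > thr for x in history[-delay:])

load = [0.55, 0.60, 0.58, 0.62, 0.88, 0.91, 0.93]
print(should_migrate(load))  # True: the last three samples stay above the adjusted threshold
```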

11.
High Performance Computing data centers have been rapidly growing, both in number and in size. Thermal management of data centers can address dominant problems associated with cooling such as the recirculation of hot air from the equipment outlets to their inlets, and the appearance of hot spots. In this paper, we are looking into assigning the incoming tasks to machines of a data center in such a way so as to affect the heat recirculation and make cooling more efficient. Using a low complexity linear heat recirculation model, we formulate the problem of minimizing the peak inlet temperature within a data center through task assignment, consequently leading to minimal cooling power consumption. We also provide two methods to solve the formulation, one that uses a genetic algorithm and the other that uses sequential quadratic programming. We show through formalization that minimizing the peak inlet temperature allows for the lowest cooling power needs. Results from a simulated, small-scale data center show that solving the formulation leads to an inlet temperature distribution that is 2 °C to 5 °C lower compared to other approaches, and achieves about 20%-30% cooling energy savings at moderate data center utilization rates. Moreover, our algorithms consistently outperform MinHR, a recirculation-reducing placement algorithm in the literature.
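The linear heat-recirculation model referred to above expresses server inlet temperatures as the supplied air temperature plus a cross-interference term, T_in = T_sup + D·p. A minimal sketch of evaluating the peak inlet temperature for candidate per-server power assignments follows (the D matrix and the power vectors are illustrative assumptions):

```python
import numpy as np

T_SUP = 20.0                        # supplied (CRAC) air temperature, °C
D = np.array([[0.02, 0.01, 0.00],   # illustrative heat-recirculation coefficients
              [0.01, 0.03, 0.01],
              [0.00, 0.01, 0.02]])

def peak_inlet_temperature(power: np.ndarray) -> float:
    """Peak inlet temperature (°C) for a given per-server power vector (watts)."""
    t_in = T_SUP + D @ power
    return float(t_in.max())

# Two candidate task assignments drawing different per-server power:
print(peak_inlet_temperature(np.array([300.0, 300.0, 300.0])))  # 35.0
print(peak_inlet_temperature(np.array([200.0, 100.0, 600.0])))  # 33.0
```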

12.
虚拟化数据中心的制冷和供电设备能耗比重大且浪费严重,但当前虚拟化能耗优化的研究仅考虑IT设备能耗,针对该问题,通过对数据中心能耗逻辑的研究,提出一种虚拟化数据中心全局能耗优化调度方法。该方法通过感知数据中心负载和热分布状况,依据虚拟化调度规则生成动态调度策略,并对虚拟设备组的制冷供电设备进行同步调度,减少数据中心冗余制冷和设备空载损耗,以此最小化数据中心能耗。实验结果表明,该调度方法可节省制冷设备近26%的冗余制冷,并提升供电设备8%左右的供电效率,提高数据中心的能耗有效性,降低整体能耗。  相似文献   

13.
Data centers now play an important role in modern IT infrastructures. Related research shows that the energy consumption of data center cooling systems has recently increased significantly. There is also strong evidence that high temperatures in a data center lead to higher hardware failure rates, and thus an increase in maintenance costs. This paper is devoted to thermal-aware workload placement for data centers. We propose an analytical model that describes data center resources with heat-transfer properties and workloads with thermal features. Two thermal-aware task scheduling algorithms, TASA and TASA-B, are then presented which aim to reduce temperatures and cooling-system power consumption in a data center. A simulation study is carried out to evaluate the performance of the proposed algorithms. Simulation results show that our algorithms can significantly reduce temperatures in data centers at the cost of a tolerable decline in system performance.
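A minimal sketch of the basic thermal-aware placement heuristic behind algorithms such as TASA: rank pending jobs by estimated thermal impact and steer the hottest jobs to the currently coolest nodes (the job and node data and the round-robin tie handling are illustrative assumptions, not the paper's exact algorithm):

```python
def thermal_aware_assign(jobs: dict[str, float], node_temps: dict[str, float]) -> dict[str, str]:
    """Map each job to a node: hottest jobs go to the coolest nodes first."""
    hot_first = sorted(jobs, key=jobs.get, reverse=True)       # by estimated heat
    cool_first = sorted(node_temps, key=node_temps.get)        # by current temperature
    return {job: cool_first[i % len(cool_first)] for i, job in enumerate(hot_first)}

jobs = {"j1": 0.9, "j2": 0.4, "j3": 0.7}          # estimated thermal impact
nodes = {"n1": 32.0, "n2": 27.0, "n3": 29.0}      # current inlet temperature, °C
print(thermal_aware_assign(jobs, nodes))          # j1 -> n2, j3 -> n3, j2 -> n1
```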

14.
Data Centers are huge power consumers, both because of the energy required for computation and the cooling needed to keep servers below thermal redlining. The most common technique to minimize cooling costs is increasing data room temperature. However, to avoid reliability issues, and to enhance energy efficiency, there is a need to predict the temperature attained by servers under variable cooling setups. Due to the complex thermal dynamics of data rooms, accurate runtime data center temperature prediction has remained an important challenge. By using Grammatical Evolution techniques, this paper presents a methodology for the generation of temperature models for data centers and the runtime prediction of CPU and inlet temperature under variable cooling setups. As opposed to time-costly Computational Fluid Dynamics techniques, our models do not need specific knowledge about the problem, can be used in arbitrary data centers, can be re-trained if conditions change, and have negligible overhead during runtime prediction. Our models have been trained and tested by using traces from real Data Center scenarios. Our results show how we can fully predict the temperature of the servers in a data room, with prediction errors below 2 °C and 0.5 °C in CPU and server inlet temperature respectively.

15.
Guaranteed-Cost Synchronization Control of Time-Delay Diffusive Complex Networks   (Cited: 6; self: 4, others: 2)
For time-delay complex network systems subject to node expansion, the guaranteed-cost synchronization control problem is studied. First, using an adaptive control method together with Lyapunov-Krasovskii stability theory and a convex-optimization treatment of the matrix inequalities, sufficient conditions are obtained for the existence of a guaranteed-cost controller for the time-delay complex network. When nodes are added and the original adaptive controller can no longer synchronize and stabilize the system, an impulsive controller is designed and pinning control is used to restore stable synchronization. The designed adaptive dynamic feedback controller guarantees asymptotic stability while keeping the system's performance index within the required bound. A numerical simulation is given to illustrate the effectiveness of the approach.

16.
Thermo-Fluids Provisioning of a High Performance High Density Data Center   (Cited: 1; self: 0, others: 0)
Consolidation and dense aggregation of slim compute, storage and networking hardware has resulted in high power density data centers. The high power density resulting from current and future generations of servers necessitates detailed thermo-fluids analysis to provision the cooling resources in a given data center for reliable operation. The analysis must also predict the impact on the thermo-fluid distribution due to changes in hardware configuration and building infrastructure such as a sudden failure in data center cooling resources. The objective of the analysis is to assure availability of adequate cooling resources to match the heat load, which is typically non-uniformly distributed and characterized by high-localized power density. This study presents an analysis of an example modern data center with a view of the magnitude of temperature variation and impact of a failure. Initially, static provisioning for a given distribution of heat loads and cooling resources is achieved to produce a reference state. A perturbation in the reference state is introduced to simulate a very plausible scenario—failure of a computer room air conditioning (CRAC) unit. The transient model shows the “redlining” of inlet temperature of systems in the area that is most influenced by the failed CRAC. In this example high-density data center, the time to reach unacceptable inlet temperature is less than 80 seconds based on an example temperature set point limit of 40°C (most of today's servers would require an inlet temperature below 35°C to operate). An effective approach to resolve this issue, if there is adequate capacity, is to migrate the compute workload to other available systems within the data center to reduce the inlet temperature to the servers to an acceptable level. Recommended by: Ahmed Elmagarmid

17.
Job scheduling in data centers can be considered from a cyber–physical point of view, as it affects the data center’s computing performance (i.e. the cyber aspect) and energy efficiency (the physical aspect). Driven by the growing need to green contemporary data centers, this paper uses recent technological advances in data center virtualization and proposes cyber–physical, spatio-temporal (i.e. start time and servers assigned), thermal-aware job scheduling algorithms that minimize the energy consumption of the data center under performance constraints (i.e. deadlines). Savings are possible by being able to temporally “spread” the workload, assign it to energy-efficient computing equipment, and further reduce the heat recirculation and therefore the load on the cooling systems. This paper provides three categories of thermal-aware energy-saving scheduling techniques: (a) FCFS-Backfill-XInt and FCFS-Backfill-LRH, thermal-aware job placement enhancements to the popular first-come first-serve with back-filling (FCFS-backfill) scheduling policy; (b) EDF-LRH, an online earliest deadline first scheduling algorithm with thermal-aware placement; and (c) an offline genetic algorithm for SCheduling to minimize thermal cross-INTerference (SCINT), which is suited for batch scheduling of backlogs. Simulation results, based on real job logs from the ASU Fulton HPC data center, show that the thermal-aware enhancements to FCFS-backfill achieve up to 25% savings compared to FCFS-backfill with first-fit placement, depending on the intensity of the incoming workload, while SCINT achieves up to 60% savings. The performance of EDF-LRH nears that of the offline SCINT for low loads, and it degrades to the performance of FCFS-backfill for high loads. However, EDF-LRH requires milliseconds of operation, which is significantly faster than SCINT, the latter requiring up to hours of runtime depending upon the number and size of submitted jobs. Similarly, FCFS-Backfill-LRH is much faster than FCFS-Backfill-XInt, but it achieves only part of FCFS-Backfill-XInt’s savings.
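A minimal sketch of the EDF-LRH idea described above: dispatch jobs in earliest-deadline-first order and place each one on an available server with the lowest heat-recirculation score (the LRH scores, server capacities, and job list are illustrative assumptions, not values from the paper):

```python
import heapq

# Illustrative per-server heat-recirculation scores (lower is better) and slot capacities.
LRH = {"s1": 0.8, "s2": 0.3, "s3": 0.5}
CAPACITY = {"s1": 2, "s2": 1, "s3": 2}

def edf_lrh(jobs):
    """jobs: list of (deadline, job_id) tuples. Returns a placement {job_id: server}."""
    heapq.heapify(jobs)                              # earliest deadline first
    free = dict(CAPACITY)
    placement = {}
    while jobs:
        _, job_id = heapq.heappop(jobs)
        candidates = [s for s, slots in free.items() if slots > 0]
        server = min(candidates, key=LRH.get)        # least recirculated heat
        free[server] -= 1
        placement[job_id] = server
    return placement

print(edf_lrh([(30, "a"), (10, "b"), (20, "c")]))    # b -> s2, c -> s3, a -> s3
```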

18.
As an important component of cloud computing, cloud storage systems underpin a wide range of cloud services. However, with the continuing growth of cloud storage systems and the neglect of energy factors at design time, their high energy consumption and low efficiency have become increasingly apparent. Cloud storage accounts for 27%-40% of the energy consumption of an entire cloud computing center, so research on energy-saving techniques for cloud storage is of great practical significance, both for reducing providers' operating costs and for protecting the environment by lowering energy use. This survey divides energy optimization in storage systems into hardware-based and scheduling-based approaches, and further compares scheduling-based approaches in three categories: node scheduling, data scheduling, and cache-prefetching techniques. Finally, four future directions are discussed: energy-aware cloud storage architectures, QoS guarantees in energy-saving modes, matching energy-saving modes with computation modes, and energy saving under erasure-coded fault tolerance.

19.
In human-carried opportunistic networks, mobile Bluetooth devices have limited battery energy, and nodes often remain disconnected from one another for long periods. Designing an effective Bluetooth wake-up scheduling scheme that reduces energy consumption without breaking the network's existing connectivity is therefore an important problem. This paper proposes BWM, a wake-up scheduling strategy for Bluetooth devices in opportunistic networks. The strategy analyzes the battery consumption of Bluetooth devices, builds an energy-aware Bluetooth data-transmission model, studies the relationships among the parameters of the sleep-wake mechanism, and controls the length of the wake-up interval under the constraint that data must still be delivered successfully. Simulation results show that BWM saves node energy while preserving effective data-transmission performance.
