Similar Documents
20 similar documents found (search time: 31 ms)
1.
In energy-aware systems, it is critical to reduce the total electric power consumption of information systems. In this paper, we consider communication-type applications in which a server transmits a large volume of data to clients. A client first selects a server in a cluster of servers and issues a file transmission request to that server. In our previous studies, we proposed the transmission power consumption (TPC) model and the extended TPC (ETPC) model of a server transmitting files to clients. In this paper, we newly propose the transmission power consumption laxity-based (TPCLB) algorithm, which is based on the TPC and ETPC models, so that the total power consumption of a cluster of servers can be reduced. We evaluate the TPCLB algorithm in terms of total power consumption and elapsed time compared with the round-robin (RR) algorithm.
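
As a rough illustration of the selection idea behind TPCLB, the sketch below picks, for each incoming file transfer, the server with the smallest estimated additional transmission energy. The linear power model, the attribute names, and the laxity estimate are illustrative assumptions, not the authors' exact TPC/ETPC formulation.

```python
# Hypothetical server-selection sketch (not the authors' exact TPCLB).
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    max_rate: float            # bytes/sec the server can transmit
    min_power: float           # watts consumed when idle
    max_power: float           # watts consumed while transmitting at max_rate
    queued_bytes: float = 0.0  # bytes already accepted but not yet sent

    def estimated_energy(self, file_size: float) -> float:
        """Rough extra energy (J) to finish the current queue plus the new file,
        assuming the server transmits at max_rate while busy."""
        busy_time = (self.queued_bytes + file_size) / self.max_rate
        return busy_time * (self.max_power - self.min_power)

def select_server(servers, file_size):
    """Pick the server whose estimated additional transmission energy is smallest."""
    return min(servers, key=lambda s: s.estimated_energy(file_size))

if __name__ == "__main__":
    cluster = [Server("s1", 10e6, 40, 60, queued_bytes=50e6),
               Server("s2", 5e6, 30, 45),
               Server("s3", 20e6, 60, 90, queued_bytes=200e6)]
    chosen = select_server(cluster, file_size=100e6)
    chosen.queued_bytes += 100e6
    print("request assigned to", chosen.name)
```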

2.
This paper investigates the fault tolerance and energy efficiency of server clusters in order to realize a reliable, energy-aware server cluster system. In information systems, a client issues a request to one server in a server cluster and the server sends a reply to the client. Once the server stops due to a fault, the client does not receive a reply to the request. Even if the request is re-performed on another server upon detection of the fault, some QoS requirements such as response time may not be satisfied. Hence, each request has to be redundantly performed on multiple servers to be tolerant of server faults. In our previous studies, we discussed the redundant power consumption laxity-based (RPCLB) algorithm, in which multiple servers are selected to redundantly and energy-efficiently perform a request process. Since each application process is redundantly performed on more than one server, a larger amount of electric power is consumed. In this paper, we propose a novel, improved RPCLB (IRPCLB) algorithm to reduce the power consumption of servers: once a process successfully terminates on one server, the now-meaningless redundant processes are forcibly terminated on the other servers. In the evaluation, we show that the IRPCLB algorithm reduces the total power consumption of servers and the total execution time of processes in both homogeneous and heterogeneous clusters compared with the RPCLB and RR algorithms.
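
A minimal sketch of the early-termination idea in IRPCLB: a request is started on several servers, and as soon as one replica succeeds the remaining replicas are cancelled. The thread-pool simulation and failure probability below are illustrative stand-ins for real servers, not the authors' algorithm.

```python
# Hypothetical sketch of "terminate redundant copies" (not the authors' exact IRPCLB).
import concurrent.futures as cf
import random
import time

def run_on_server(server_id: str, request: str) -> str:
    """Stand-in for performing the request on one server; may be slow or faulty."""
    time.sleep(random.uniform(0.1, 0.5))
    if random.random() < 0.2:                     # simulated server fault
        raise RuntimeError(f"{server_id} failed")
    return f"{request} done on {server_id}"

def redundant_execute(servers, request):
    """Start the request on several servers; on the first success, cancel the
    copies that have not started yet (a real system would also signal the
    already-running servers to abort, so they stop consuming power)."""
    with cf.ThreadPoolExecutor(max_workers=len(servers)) as pool:
        futures = {pool.submit(run_on_server, s, request): s for s in servers}
        for fut in cf.as_completed(futures):
            try:
                result = fut.result()
            except RuntimeError:
                continue                          # this replica failed; wait for another
            for other in futures:
                other.cancel()                    # stop the redundant replicas
            return result
    raise RuntimeError("all replicas failed")

if __name__ == "__main__":
    print(redundant_execute(["s1", "s2", "s3"], "req-42"))
```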

3.
As cloud computing continues to gain significance across fields, the energy consumption of datacentres creates new challenges in the design and operation of computer systems, with cooling remaining a key part of the total energy expenditure. We investigate the implications of increasing the room temperature setpoint in datacentres to save energy. For this, we develop a holistic model for the energy consumption of the server room that depends on user workload and service level agreement constraints, and that considers both cooling and computing energy dissipation. The model is applicable to a steady-state analysis of the system and brings insight into the impact of the most relevant parameters that affect the net energy consumption, such as the outside temperature, room temperature setpoint, and user demand. We analyse both static and dynamic server provisioning cases. In the latter case, a global power management scheme determines the optimal number of servers required to handle the incoming user demand and fulfil a target service level objective. Finally, we consider the extra energy needed to maintain service continuity under the higher server mortality rate expected at warmer operational temperatures. Energy and temperature measurements acquired from a server machine running scientific benchmark programs allow us to fix realistic model parameters for the study and to draw pragmatic conclusions.
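
To illustrate the kind of trade-off such a model captures, the toy calculation below balances computing power, which rises with room temperature, against cooling power, which falls as the setpoint moves away from the outside temperature. The COP and leakage expressions and all coefficients are made-up placeholders, not the paper's fitted model.

```python
# Toy steady-state energy balance for one hour of operation (illustrative only).
def cooling_cop(setpoint_c: float, outside_c: float) -> float:
    """Placeholder COP model: cooling gets more efficient as the room setpoint
    rises relative to the outside temperature (coefficients are made up)."""
    return max(1.5, 4.0 + 0.1 * (setpoint_c - outside_c))

def server_power(setpoint_c: float, utilization: float) -> float:
    """Placeholder per-server power (W): a dynamic part driven by load plus a
    leakage/fan term that grows with room temperature."""
    return 100 + 150 * utilization + 2.0 * max(0.0, setpoint_c - 20)

def total_energy_kwh(n_servers, utilization, setpoint_c, outside_c):
    it_power = n_servers * server_power(setpoint_c, utilization)    # W
    cooling_power = it_power / cooling_cop(setpoint_c, outside_c)   # W
    return (it_power + cooling_power) / 1000.0                      # kWh per hour

if __name__ == "__main__":
    for sp in (18, 22, 26, 30):
        print(sp, "degC setpoint ->",
              round(total_energy_kwh(100, 0.6, sp, outside_c=15), 1), "kWh")
```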

4.
To meet the heterogeneous resource management and aggregation requirements of cloud computing, big data, and similar applications, a converged-architecture cloud server and its key supporting technologies are proposed. Using hardware resource pooling, the converged-architecture cloud server decouples and reconverges the computing, storage, network, power-supply, cooling, and management modules. It offers high density, low power consumption, and ease of scaling, management, and maintenance, combines the advantages of scale-out and scale-up, optimizes the costs of system deployment, operation, and energy, and significantly reduces the total cost of ownership (TCO). Real-world deployments in the finance, telecom, and Internet industries show that the converged-architecture cloud server reduces power consumption by more than 15% and TCO by nearly 15%, providing an IT infrastructure design with better performance per watt for cloud computing and big data applications.

5.
With rapid advances in mobile computing, multi-core processors and expanded memory resources are being made available in new mobile devices. This trend will allow a wider range of existing applications to be migrated to mobile devices, for example, running desktop applications in IA-32 (x86) binaries on ARM-based mobile devices transparently using dynamic binary translation (DBT). However, the overall performance could significantly affect the energy consumption of the mobile devices because it is directly linked to the number of instructions executed and the overall execution time of the translated code. Hence, even though the capability of today's mobile devices will continue to grow, concerns over translation efficiency and energy consumption put more constraints on a DBT for mobile devices, in particular for thin mobile clients, than on one for servers. Increasing network accessibility and bandwidth in various environments make many network servers highly accessible to thin mobile clients. Those network servers are usually equipped with a substantial amount of resources. This provides an opportunity for a DBT on thin clients to leverage such powerful servers. However, designing such a DBT for a client/server environment requires many critical considerations. In this work, we looked at those design issues and developed a distributed DBT system based on a client/server model. It consists of two dynamic binary translators: an aggressive dynamic binary translator/optimizer on the server to service translation/optimization requests from thin clients, and a thin DBT on each thin client to perform lightweight binary translation and basic emulation functions on its own. With such a two-translator client/server approach, we successfully off-load the DBT overhead of the thin client to the server and achieve a significant performance improvement over the non-client/server model. Experimental results show that the DBT of the client/server model could achieve 37% and 17% improvement over that of the non-client/server model for x86/32-to-ARM emulation using MiBench and SPEC CINT2006 benchmarks with test inputs, respectively, and 84% improvement using SPLASH-2 benchmarks running two emulation threads.

6.
Optimizing virtual machine placement is an effective way to reduce the energy consumption of cloud data centers; however, over-consolidating virtual machines may create hot spots in host racks and degrade the reliability of the data center's services. This paper proposes a virtual machine placement algorithm based on energy efficiency and reliability. Considering the interrelations among host utilization, host temperature, host power, cooling-system power, and host reliability, a redundancy model that guarantees host reliability is established. While proactively avoiding rack hot spots, the algorithm makes dynamic virtual machine placement decisions, ensuring host service reliability while reducing the overall energy consumption of the data center. Simulation results show that the algorithm not only saves more energy and avoids hot-spot hosts, but also provides better performance guarantees.

7.
Because of problems such as slow CPU speed and limited battery capacity, smart mobile devices cannot execute computation-intensive applications by themselves and need edge computing to lower the hardware requirements that applications place on the devices. However, offloading part of the computation from a mobile device to an edge server introduces extra transmission energy and server computation energy. Considering the four factors that determine the energy consumption of mobile devices, servers, and data transmission, namely the device's computing speed, its download power, the data offloading percentage, and the remaining network bandwidth share, a particle swarm optimization algorithm based on hierarchical learning is proposed to optimize these four parameters for each mobile device and allocate computing resources more reasonably so that the total energy consumption is minimized. The resource model also takes into account constraints on maximum energy, computation cycles, storage, bandwidth, and latency. Comparative experiments with other algorithms show that the hierarchically learned particle swarm algorithm obtains, more quickly, an optimal resource-scheduling solution that satisfies the constraints with lower energy consumption.
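
The sketch below runs a plain particle swarm over the four per-device parameters mentioned above (compute speed, radio power, offload fraction, bandwidth share) against a toy energy-plus-deadline-penalty model. The energy model, bounds, and PSO constants are illustrative assumptions; the paper's hierarchical-learning variant is not reproduced here.

```python
# Plain PSO over (compute speed, radio power, offload fraction, bandwidth share)
# with a toy energy model and a deadline penalty; all constants are illustrative.
import random

LOW  = [0.5e9, 0.1, 0.0, 0.05]   # lower bounds for the four parameters
HIGH = [2.0e9, 1.0, 1.0, 1.00]   # upper bounds

def total_energy(x, task_cycles=2e9, data_bits=8e6, link_rate=10e6,
                 server_speed=5e9, deadline=0.8, kappa=1e-28,
                 server_j_per_cycle=1e-10):
    """Toy model: local CPU energy + radio energy + server energy, plus a
    penalty when the parallel local/remote parts miss the deadline."""
    speed, radio_power, offload, bw_share = x
    local_time = (1 - offload) * task_cycles / speed
    remote_time = (offload * data_bits) / (bw_share * link_rate) \
                  + offload * task_cycles / server_speed
    energy = (kappa * (1 - offload) * task_cycles * speed ** 2
              + radio_power * (offload * data_bits) / (bw_share * link_rate)
              + server_j_per_cycle * offload * task_cycles)
    return energy + 10.0 * max(0.0, max(local_time, remote_time) - deadline)

def pso(f, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    dim = len(LOW)
    pos = [[random.uniform(LOW[d], HIGH[d]) for d in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    best_i = min(range(n_particles), key=lambda i: pbest_val[i])
    g, g_val = pbest[best_i][:], pbest_val[best_i]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (g[d] - pos[i][d]))
                pos[i][d] = min(HIGH[d], max(LOW[d], pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < g_val:
                    g, g_val = pos[i][:], val
    return g, g_val

if __name__ == "__main__":
    best, joules = pso(total_energy)
    print("parameters:", [round(v, 3) for v in best], "energy:", round(joules, 4))
```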

8.
The energy consumed by servers when executing tasks is a major component of the dynamic energy consumption of a cloud computing system. To reduce the total energy consumed by task execution in cloud systems, an energy-optimized earliest-finish-time task scheduling method is proposed, together with a server dynamic power model, a server execution energy model based on dynamic power, and an energy optimization model for the cloud system. The scheduling strategy selects different scheduling algorithms according to a task's deadline requirement and its execution energy on different servers, so as to obtain the minimum total task execution energy. Experimental results show that the proposed task scheduling method satisfies task deadline requirements well and reduces the total energy consumed by task execution in cloud computing systems.
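
A minimal sketch of deadline-aware, energy-minimizing dispatch in the spirit of the method above: among hosts that can meet the deadline, pick the one with the lowest execution energy, otherwise fall back to earliest finish time. The host attributes and power model are illustrative, not the paper's exact models.

```python
# Hypothetical deadline-aware, energy-minimizing dispatch (illustrative models).
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    speed: float          # instructions per second
    dyn_power: float      # watts while executing
    free_at: float = 0.0  # time at which the host becomes idle

def dispatch(task_len, deadline, hosts, now=0.0):
    """Among hosts that can finish before the deadline, pick the one with the
    lowest execution energy; if none can, fall back to earliest finish time."""
    def finish(h):
        return max(now, h.free_at) + task_len / h.speed
    def energy(h):
        return h.dyn_power * task_len / h.speed
    feasible = [h for h in hosts if finish(h) <= deadline]
    chosen = min(feasible, key=energy) if feasible else min(hosts, key=finish)
    chosen.free_at = finish(chosen)
    return chosen

if __name__ == "__main__":
    cluster = [Host("fast", 2e9, 120.0), Host("slow", 1e9, 50.0)]
    for length, dl in [(4e9, 3.0), (4e9, 10.0)]:
        print("deadline", dl, "->", dispatch(length, dl, cluster).name)
```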

9.
Cooling and power-supply equipment accounts for a large, and largely wasted, share of the energy consumption of virtualized data centers, yet current research on virtualization-based energy optimization considers only IT equipment. To address this problem, based on a study of data center energy-consumption logic, a global energy-optimized scheduling method for virtualized data centers is proposed. The method senses data center load and thermal distribution, generates dynamic scheduling strategies according to virtualization scheduling rules, and synchronously schedules the cooling and power-supply equipment associated with groups of virtual devices, reducing redundant cooling and idle-equipment losses and thereby minimizing the data center's energy consumption. Experimental results show that the scheduling method can save nearly 26% of redundant cooling energy and improve power-supply efficiency by about 8%, improving the data center's power usage effectiveness and reducing overall energy consumption.

10.
We consider the problem of power and performance management for a multicore server processor in a cloud computing environment by optimal server configuration for a specific application environment. The motivation of the study is that such optimal virtual server configuration is important for dynamic resource provision in a cloud computing environment to optimize the power and performance tradeoff for certain specific types of applications. Our strategy is to treat a multicore server processor as an M/M/m queueing system with multiple servers. The system performance measures are the average task response time and the average power consumption. Two core speed and power consumption models are considered, namely, the idle-speed model and the constant-speed model. Our investigation includes justification of centralized management of computing resources, server speed constrained optimization, power constrained performance optimization, and performance constrained power optimization. Our main results are (1) cores should be managed in a centralized way to provide the highest performance without consuming more energy in cloud computing; (2) for a given server speed constraint, fewer high-speed cores perform better than more low-speed cores; furthermore, there is an optimal selection of server size and core speed, which can be obtained analytically, such that a multicore server processor consumes the minimum power; (3) for a given power consumption constraint, there is an optimal selection of server size and core speed, which can be obtained numerically, such that the best performance can be achieved, i.e., the average task response time is minimized; (4) for a given task response time constraint, there is an optimal selection of server size and core speed, which can be obtained numerically, such that minimum power consumption can be achieved while the given performance guarantee is maintained.
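
The queueing view can be made concrete with the standard M/M/m formulas: mean response time from the Erlang C waiting probability, and power from a speed-to-the-alpha dynamic power law under the two core models. The exponent alpha and the speed/power scaling below are common assumptions, not necessarily the paper's parameter values.

```python
# Standard M/M/m formulas for the multicore-server view; the power-law exponent
# alpha and the speed/power scaling are common assumptions, not the paper's values.
import math

def erlang_c(m: int, rho: float) -> float:
    """Probability that an arriving task must wait, for M/M/m with per-core
    utilization rho = lambda / (m * mu)."""
    a = m * rho
    tail = a ** m / (math.factorial(m) * (1 - rho))
    return tail / (sum(a ** k / math.factorial(k) for k in range(m)) + tail)

def mean_response_time(lam: float, m: int, speed: float, work: float = 1.0) -> float:
    """Average task response time: service time work/speed plus mean waiting time."""
    mu = speed / work                       # per-core service rate
    rho = lam / (m * mu)
    assert rho < 1, "the system must be stable"
    return 1 / mu + erlang_c(m, rho) / (m * mu - lam)

def power(lam: float, m: int, speed: float, alpha: float = 2.0,
          model: str = "idle") -> float:
    """Dynamic power ~ speed**alpha per busy core: under the idle-speed model
    only busy cores consume power; under the constant-speed model all m cores do."""
    busy_cores = lam / speed if model == "idle" else m
    return busy_cores * speed ** alpha

if __name__ == "__main__":
    for m, s in [(4, 2.0), (8, 1.0)]:       # few fast cores vs. more slow cores
        print(m, "cores at speed", s, "->",
              round(mean_response_time(6.0, m, s), 3), "time units,",
              round(power(6.0, m, s, model="idle"), 2), "power units")
```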

11.
Research and Implementation of a CPU-Utilization-Based Power Control Strategy
With the rapid increase in server energy consumption, electricity cost has become a major part of the total cost of ownership of servers. At the same time, the average utilization of the large installed base of x86 servers remains at only 10%–15%, wasting a great deal of electric power. To improve server power-usage efficiency, this paper proposes a server power control strategy based on CPU utilization, which adjusts server power in real time according to the load and reduces electricity consumption without affecting the user experience. The strategy has been applied in the Sugon adaptive energy-saving system (PowerConf) and achieved good energy savings, mitigating to some extent the increasingly serious problem of server energy consumption.
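
A hypothetical control loop in the spirit of utilization-driven power capping is sketched below: the cap is lowered while CPU utilization stays low and raised when load returns. The thresholds, cap range, and the set_power_cap hook are illustrative stand-ins, not the PowerConf implementation.

```python
# Hypothetical utilization-driven power-cap loop (illustrative, not PowerConf).
def set_power_cap(watts: float) -> None:
    """Stub: apply a server power cap (e.g. via DVFS or a management controller)."""
    print(f"cap -> {watts:.0f} W")

def control_loop(util_trace, min_cap=120.0, max_cap=300.0, step=20.0):
    """Lower the cap while CPU utilization stays low, raise it when load returns,
    keeping the server inside [min_cap, max_cap]."""
    cap = max_cap
    for util in util_trace:
        if util > 0.80:
            cap = min(max_cap, cap + step)   # load high: give the CPU headroom
        elif util < 0.30:
            cap = max(min_cap, cap - step)   # load low: save power
        set_power_cap(cap)

if __name__ == "__main__":
    control_loop([0.10, 0.12, 0.15, 0.70, 0.90, 0.95, 0.20])
```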

12.
The original purpose of the IEEE 802.11h standard was to extend WLAN operation in Europe to the 5 GHz band, where WLAN devices need dynamic frequency selection (DFS) and transmit power control (TPC) to coexist with the band's primary users, i.e. radar and satellite systems. Although 802.11h defines the mechanisms or protocols for DFS and TPC, it does not cover the implementations themselves. With some amendments to the 802.11h standard to address this need, the mechanisms for DFS and TPC can be used for many other applications with the benefits of automatic frequency planning, power consumption reduction, range control, interference reduction, and quality-of-service enhancement. MiSer is presented as a simple application example for saving significant communication energy for 802.11h devices.

13.
Computer Networks, 2007, 51(17): 4765-4779
Communication is usually the most energy-consuming event in Wireless Sensor Networks (WSNs). One way to significantly reduce energy consumption is applying transmission power control (TPC) techniques to dynamically adjust the transmission power. This article presents two new TPC techniques for WSNs. The experimental evaluation compares the performance of the TPC techniques with B-MAC, the standard MAC protocol of the Mica2 platform. These experiments take into account different distances among nodes, concurrent transmissions, and node mobility. The new transmission power control techniques decrease energy consumption by up to 57% over B-MAC while maintaining the reliability of the channel. Under a low mobility scenario, the proposed protocols delivered up to 95% of the packets, showing that such methods are able to cope with node movement. We also show that the contention caused by higher transmission levels might be lower than analytical models suggest, due to the action of the capture effect.
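
A simple closed-loop flavour of transmission power control is sketched below: the sender nudges its power level so that the receiver-reported RSSI stays inside a target window. The window and step values are made up, and the sketch is not either of the article's two techniques.

```python
# Illustrative closed-loop transmission power control for one sensor link.
def adjust_tx_power(level, rssi_dbm, low=-90.0, high=-80.0,
                    min_level=1, max_level=31):
    """Return the next transmit power level given the last reported RSSI:
    raise it when the link is too weak, lower it when there is margin."""
    if rssi_dbm < low and level < max_level:
        return level + 1        # link too weak: transmit louder
    if rssi_dbm > high and level > min_level:
        return level - 1        # link has margin: save energy
    return level

if __name__ == "__main__":
    level = 15
    for rssi in (-95, -93, -88, -84, -79, -76, -85):
        level = adjust_tx_power(level, rssi)
        print(rssi, "dBm ->", "level", level)
```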

14.
杜辉, 李卓, 陈昕. 《计算机科学》, 2022, 49(3): 23-30
In hierarchical federated learning, energy-constrained mobile devices consume their own resources when participating in model training. To reduce the energy consumption of mobile devices, this paper formulates the problem of minimizing the total energy consumption of mobile devices without exceeding the maximum tolerable time of hierarchical federated learning. Edge servers can select different mobile devices in different training rounds, and a mobile device can concurrently train models for different edge servers; therefore, the ODAM-DS algorithm, based on an online double auction mechanism, is proposed. Based on the optimal...

15.
Edge servers deployed under traditional network architectures can hardly satisfy the access and communication-quality requirements of large numbers of user devices. To increase network capacity and improve spectrum utilization, a task-offloading optimization model for ultra-dense edge computing networks is built on densely deployed base stations. Facing the challenges that changing channel states, the dynamic demands of mobile devices, and limited server and spectrum resources pose for task offloading, and taking into account task types, server computing capacity, channel-state variation, device demands, and interference constraints on the offloading strategy, a task-offloading method based on an adaptive genetic simulated annealing (AGASA) algorithm is proposed to optimize offloading energy consumption while meeting task deadlines. In addition, a golden-section algorithm is used to solve the power-control problem and obtain the optimal upload power, thereby reducing transmission energy. Experimental results show that the AGASA algorithm maintains communication quality and computational efficiency as the channel state changes and, compared with a hybrid genetic particle swarm algorithm, reduces offloading energy consumption by 15.56% while satisfying deadline constraints.
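
The golden-section step can be illustrated directly: minimize a toy upload-energy function E(p) = (p + p_circuit) * D / r(p) with a Shannon-rate channel. The circuit-power term, channel gain, noise, bandwidth, and data size are made-up values, not the paper's system model.

```python
# Golden-section search for an upload power that minimizes a toy energy model
# E(p) = (p + circuit_power) * data / rate(p); all constants are illustrative.
import math

def transmission_energy(p, data_bits=8e6, bandwidth=1e6, gain=1e-6,
                        noise=1e-9, circuit_power=0.1):
    """Energy (J) to upload data_bits at transmit power p (W), assuming a
    Shannon-capacity link and a fixed circuit power while transmitting."""
    rate = bandwidth * math.log2(1 + p * gain / noise)    # bits per second
    return (p + circuit_power) * data_bits / rate

def golden_section_min(f, lo, hi, tol=1e-4):
    """Standard golden-section search for a unimodal function on [lo, hi]."""
    phi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - phi * (b - a)
        else:
            a, c = c, d
            d = a + phi * (b - a)
    return (a + b) / 2

if __name__ == "__main__":
    p_opt = golden_section_min(transmission_energy, 0.01, 2.0)
    print(round(p_opt, 4), "W ->", round(transmission_energy(p_opt), 3), "J")
```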

16.
The size and number of high-performance data centers have been growing rapidly around the world in recent years. The growth in the leakage power consumption of servers, along with its exponential dependence on the ever-increasing process variation in nanometer technologies, has made it inevitable to move toward variation-aware power reduction strategies in data centers. In this paper, we address the problem of joint server placement and chassis consolidation to minimize power consumption of high-performance computing data centers under process variation. To this end, we introduce two variation-aware server placement heuristics as well as an integer linear programming (ILP)-based server placement method to find the best location of each server in the data center based on its power consumption and the data center heat recirculation model. We then incorporate a novel ILP-based variation-aware chassis consolidation technique to find the optimum task assignment solution under the obtained server placement approach to minimize total power consumption. Experimental results show that by applying the proposed joint variation-aware server placement and chassis consolidation techniques, up to 14.6% improvement can be obtained at common data center utilization rates compared to state-of-the-art variation-unaware approaches.
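
A greedy, much-simplified flavour of variation-aware placement is sketched below: the servers that process variation makes most power-hungry are assigned to the rack slots that contribute least to heat recirculation. The numbers are made up, and the paper's actual methods are ILP-based and jointly handle chassis consolidation.

```python
# Illustrative greedy variation-aware placement (not the paper's ILP formulation).
def greedy_placement(server_powers, slot_recirculation):
    """server_powers: watts per server; slot_recirculation: relative fraction of
    a slot's exhaust heat that recirculates to inlets.  Returns slot -> server."""
    servers = sorted(range(len(server_powers)),
                     key=lambda i: server_powers[i], reverse=True)
    slots = sorted(range(len(slot_recirculation)),
                   key=lambda j: slot_recirculation[j])
    return {slot: srv for slot, srv in zip(slots, servers)}

if __name__ == "__main__":
    powers = [310, 280, 295, 260]        # process variation makes powers unequal
    recirc = [0.05, 0.20, 0.12, 0.30]    # per-slot heat-recirculation weight
    print(greedy_placement(powers, recirc))
```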

17.
The on-board traction battery is the only, and a very limited, power source of a battery electric vehicle. Auxiliary devices, especially the air conditioner, consume electric energy and reduce the vehicle's driving range, so developing an efficient air-conditioning system is key to the market acceptance of battery electric vehicles. Covering the roof of the vehicle with solar cells can strengthen the cooling capacity of the air-conditioning system while also extending the driving range. This paper studies a solar-assisted air-conditioning system for small battery electric vehicles and designs an automatic control system suited to it, creating a more comfortable driving and riding environment for battery electric vehicles.

18.
Minimum-Energy Paths for Reliable Communication in Multi-Hop Underwater Acoustic Networks
王琛, 方彦军. 《传感技术学报》, 2012, 25(8): 1153-1158
Existing studies of the energy consumption of underwater acoustic networks do not consider the extra energy caused by retransmissions when a data transmission fails. This paper studies the minimum-energy path of a multi-hop underwater acoustic network for reliable communication under a given received signal-to-noise ratio. First, a network energy model for reliable communication is established, and an approximate expression for the optimal frequency-distance relationship is obtained by curve fitting to simplify the energy model. Then the minimum-energy paths under variable and fixed transmission power are analysed; it is proved theoretically that a straight-line, equally spaced relay network has the minimum total energy consumption, and methods for finding the optimal hop count and optimal hop distance of such a network are given. Simulation results verify the correctness of the theory.

19.
HVDC transmission is an important part of China's "West-to-East Power Transmission" strategy and one of the main directions in the development of power transmission technology. Scaling of the internal cooling water in converter stations is a major factor affecting the stable operation of DC transmission systems. To monitor the quality of the internal cooling water in real time, an online monitoring system for converter-station internal cooling water was developed. Without manual intervention, it monitors indicators such as pH, aluminium ion concentration, and conductivity in real time; the combined analysis provides a basis for determining the optimal operating conditions of the converter station and serves as an early fault warning. Field tests show that the system is fully functional and performs stably.

20.
In edge computing scenarios, offloading some tasks to edge servers can reduce the load on mobile devices, improve application performance, and lower device overhead. Delay-sensitive tasks are useful only if they are completed before their deadlines, but edge-server resources are often limited: when data transmission and processing tasks arrive from multiple devices at the same time, tasks may wait in the queue for a long time and some of them fail due to timeouts, so the performance goals of multiple devices cannot all be met. Therefore, on top of computation offloading, this work optimizes the task scheduling order on the edge server. On the one hand, delay-aware task scheduling is modelled as a long-term optimization problem, and an online learning method based on combinatorial multi-armed bandits is used to dynamically adjust the server's scheduling order. On the other hand, because different task execution orders change how much offloading improves performance, they also affect the effectiveness of offloading decisions; to make the offloading strategy more robust, a deep Q-learning method with perturbed rewards is used to decide where each task is executed. Simulation results show that the proposed strategy balances the objectives of multiple users while reducing the overall system cost.
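
As a much-simplified stand-in for the online-learning scheduler, the sketch below uses a UCB1 bandit that, round by round, picks one of a few candidate task orderings and learns which one meets the most deadlines. The candidate policies and the reward are illustrative; the paper's combinatorial bandit and perturbed-reward DQN are not reproduced here.

```python
# UCB1 bandit choosing among candidate task orderings (illustrative stand-in).
import math
import random

POLICIES = {
    "fifo": lambda tasks: list(tasks),
    "edf":  lambda tasks: sorted(tasks, key=lambda t: t[1]),   # earliest deadline
    "sjf":  lambda tasks: sorted(tasks, key=lambda t: t[0]),   # shortest job first
}

def fraction_on_time(order):
    """Reward: share of tasks (exec_time, deadline) finishing before their deadline
    when executed sequentially on one server."""
    t, hits = 0.0, 0
    for exec_time, deadline in order:
        t += exec_time
        hits += t <= deadline
    return hits / len(order)

def ucb1(rounds=500):
    names = list(POLICIES)
    counts = {n: 0 for n in names}
    totals = {n: 0.0 for n in names}
    for i in range(1, rounds + 1):
        tasks = [(random.uniform(0.1, 1.0), random.uniform(0.5, 3.0))
                 for _ in range(8)]
        if i <= len(names):                   # play each arm once first
            name = names[i - 1]
        else:                                 # then pick by UCB index
            name = max(names, key=lambda n: totals[n] / counts[n]
                       + math.sqrt(2 * math.log(i) / counts[n]))
        reward = fraction_on_time(POLICIES[name](tasks))
        counts[name] += 1
        totals[name] += reward
    return {n: round(totals[n] / counts[n], 3) for n in names}

if __name__ == "__main__":
    print(ucb1())   # average on-time fraction learned for each ordering policy
```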

