Similar Documents (20 results)
1.
There has been growing concern about the energy consumption and environmental impact of datacenters. Some pioneers have begun to power datacenters with renewable energy to offset their carbon footprint. However, it is challenging to integrate intermittent renewable energy into a datacenter power system. Grid-tied systems are widely deployed in renewable energy powered datacenters, but the drawbacks of grid-tie inverters (e.g., harmonic disturbance and high cost) hamper this design. Besides, the mixture of green load and brown load makes power management heavily dependent on software measurement and monitoring, which often suffers from inaccuracy. We propose DualPower, a novel power provisioning architecture that enables green datacenters to integrate renewable power supply without grid-tie inverters. To optimize DualPower operation, we propose a specially designed power management framework to coordinate workload balancing with power supply switching. We evaluate three optimization schemes (LM, PS and JO) under different datacenter operation scenarios on our trace-driven simulation platform. The experimental results show that DualPower can be as efficient as a grid-tied system and scales well. In contrast to previous works, DualPower integrates renewable power at lower cost and maintains full availability of datacenter servers.
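The abstract does not detail DualPower's switching policy; as a hedged sketch (hypothetical function and parameter names, and a single uniform server wattage assumed for simplicity), a supply-switching decision of the kind such a power manager must make could look like this:

```python
def plan_power(servers, server_watts, green_watts):
    """Decide how many servers run on the renewable (green) bus versus the
    grid (brown) bus, given the renewable power currently available.
    Simplification: every server draws the same server_watts."""
    green_capacity = int(green_watts // server_watts)  # servers the green bus can carry
    on_green = min(servers, green_capacity)
    return {"green": on_green, "grid": servers - on_green}
```

A workload balancer would then route more load to the servers on the green bus, which is the coordination the paper's framework is responsible for.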

2.
With the continuous development of the payment market, the data structure characteristics of new business forms such as the mobile Internet have changed significantly, and the intelligent cloud data center is the general trend of development in the industry. This paper proposes an artificial intelligence method and system design for dynamic scheduling of cloud resources based on business prediction. The resource availability of physical machines changes over time, and resources must be reintegrated to save energy and meet service requirements. In the early stage of large-scale marketing, capacity analysis is combined with advance prediction, and intelligent multi-dimensional capacity expansion decisions based on artificial intelligence and machine self-learning are adopted. The proposed energy-efficient dynamic migration and integration method for virtual machines in cloud data centers offers a new solution for improving the energy efficiency of cloud computing data centers, ensuring system reliability, and reducing operation and maintenance costs.

3.
Cloud computing services have recently become a ubiquitous service delivery model, covering a wide range of applications from personal file sharing to enterprise data warehousing. Building green data center networks for cloud computing services is an emerging trend in the Information and Communication Technology (ICT) industry, because of global warming and the greenhouse gas (GHG) emissions resulting from cloud services. As one of the first worldwide initiatives to provision ICT services entirely on renewable energy such as solar, wind, and hydroelectricity across Canada and around the world, the GreenStar Network (GSN) was developed to dynamically transport user services to data centers built in proximity to green energy sources, reducing the GHG emissions of ICT equipment. Under the current approach, which focuses mainly on reducing energy consumption at the micro-level through energy efficiency improvements, overall energy consumption will eventually increase due to the growing demand from new services and users, resulting in an increase in GHG emissions. Based on the cooperation between Mantychore FP7 and the GSN, our approach is therefore much broader and more appropriate because it focuses on GHG emission reductions at the macro-level. This article presents some outcomes of our implementation of such a network model, which spans multiple green nodes in Canada, Europe and the USA. The network provides cloud computing services based on dynamic provisioning of network slices through relocation of virtual data centers.

4.
Energy-efficient data centers
Energy consumption of the Information and Communication Technology (ICT) sector has grown exponentially in recent years. A major component of today's ICT is data centers, which have recently experienced unprecedented growth in size and number. Internet giants such as Google, IBM and Microsoft house large data centers for cloud computing and application hosting. Many studies on the energy consumption of data centers point to the need for strategies to improve energy efficiency. Due to large-scale carbon dioxide ( $\mathrm{CO}_2$ ) emissions in the process of electricity production, ICT facilities are indirectly responsible for considerable amounts of greenhouse gas emissions. The heat generated by these densely populated data centers requires large cooling units to keep temperatures within the operational range. These cooling units escalate the total energy consumption and have their own carbon footprint. In this survey, we discuss various aspects of energy efficiency in data centers, with added emphasis on its motivations. In addition, we discuss research ideas, industry-adopted techniques, and the issues that need immediate attention in the context of energy efficiency in data centers.

5.
The use of High Performance Computing (HPC) in commercial and consumer IT applications is becoming popular. HPC users need rapid and scalable access to high-end computing capabilities. Cloud computing promises to deliver such a computing infrastructure using data centers, so that HPC users can access applications and data from a Cloud anywhere in the world on demand and pay based on what they use. However, the growing demand drastically increases the energy consumption of data centers, which has become a critical issue. High energy consumption not only translates to high energy cost, which reduces the profit margin of Cloud providers, but also to high carbon emissions, which are not environmentally sustainable. Hence, there is an urgent need for energy-efficient solutions that can address the rising energy consumption from the perspective of both the Cloud provider and the environment. To address this issue, we propose near-optimal scheduling policies that exploit heterogeneity across multiple data centers for a Cloud provider. We consider a number of energy efficiency factors (such as energy cost, carbon emission rate, workload, and CPU power efficiency) which vary across data centers depending on their location, architectural design, and management system. Our carbon/energy based scheduling policies achieve, on average, up to 25% energy savings in comparison to profit based scheduling policies, leading to higher profit and lower carbon emissions.
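The paper's near-optimal policies are not reproduced in the abstract; the toy greedy dispatcher below (hypothetical field names; weighting price and carbon rate additively is my assumption, not the authors' formulation) illustrates how heterogeneity across data centers can be exploited:

```python
def schedule(jobs, datacenters):
    """Greedy carbon/energy-aware dispatch: send each job to the data
    center with free capacity and the lowest combined energy + carbon
    cost per job. Each datacenter is a dict with keys: name, free_slots,
    energy_per_job (kWh), carbon_rate (kg CO2/kWh), price (cost/kWh)."""
    placement = {}
    for job in jobs:
        candidates = [d for d in datacenters if d["free_slots"] > 0]
        if not candidates:
            break  # no capacity left anywhere
        best = min(candidates,
                   key=lambda d: d["energy_per_job"] * (d["price"] + d["carbon_rate"]))
        best["free_slots"] -= 1
        placement[job] = best["name"]
    return placement
```

Jobs overflow to dirtier or pricier sites only once the cleaner ones are full, which is the qualitative behavior the abstract describes.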

6.
The power system is presently experiencing vital changes: it is advancing from a centralized structure to a decentralized one, primarily because of the enormous growth of distributed renewable energy sources, so future power systems require new control strategies. These systems must be able to withstand new requirements, such as the highly distributed nature of the grid, the intermittency of renewable energy sources, and the restricted bandwidth available for communications. Multi-agent systems (MAS) have attributes that meet these requirements. A certain degree of distributed or collective intelligence can be achieved through the interaction of these agents with one another, cooperating or competing to achieve their objectives. This paper outlines the fundamental ideas of MAS and its different platforms, and provides a comprehensive survey of the power system applications in which MAS techniques have been applied. For each power system application, technical details are also discussed.

7.

The demand for higher computing power keeps increasing and, as a result, so does the demand for services hosted in cloud computing environments. It is known, for example, that in 2018 more than 4 billion people accessed these services daily through the Internet, corresponding to more than half of the world's population. Such services are made available by large data centers. These systems are responsible for increasing electricity consumption, given the growing number of accesses and the demand for greater communication capacity, processing, and high availability. Since electricity is not always obtained from renewable resources, the relentless pursuit of cloud services can have a significant environmental impact. In this context, this paper proposes an integrated and dynamic strategy that demonstrates the impact of the availability of data center architecture equipment on energy consumption. For this, we use colored Petri net (CPN) models, which quantify the cost, environmental impact, and availability of the electricity infrastructure of the data centers under analysis. The proposed models are supported by a tool we developed, so data center designers do not need to know CPN to compute the metrics of interest. A case study shows the applicability of the proposed strategy. Significant results were obtained, showing a 100% increase in system availability with equivalent operating cost and environmental impact.
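The CPN models themselves are not given here; as a much simpler stand-in, the standard closed-form availability combinators below show how per-equipment availability (from MTBF/MTTR) propagates through series and redundant (parallel) parts of a power infrastructure. The example topology and figures are invented for illustration:

```python
def availability(mtbf_h, mttr_h):
    """Steady-state availability of one component from mean time between
    failures and mean time to repair (both in hours)."""
    return mtbf_h / (mtbf_h + mttr_h)

def series(*parts):
    """All components must be up (e.g., feed -> UPS -> PDU)."""
    a = 1.0
    for p in parts:
        a *= p
    return a

def parallel(*parts):
    """At least one redundant component must be up (e.g., utility + generator)."""
    down = 1.0
    for p in parts:
        down *= 1.0 - p
    return 1.0 - down

# Utility feed backed by a generator, then UPS and PDU in series.
feed = parallel(availability(2000, 8), availability(1000, 24))
total = series(feed, availability(5000, 8), availability(10000, 4))
```

Tools like the one the paper describes automate exactly this kind of propagation, but over richer stochastic models than these closed forms.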


8.
The main technological barrier to relying solely on renewable energy resources is that sources such as wind and solar are highly intermittent in availability, resulting in uncertainty in demand satisfaction. This paper focuses on the integration of these uncertain renewable energy sources with relatively deterministic energy sources such as a reformer-based fuel cell and a battery. A power mix between these multiple renewable energy sources and the reformer-based fuel cell system, coupled with an energy storage option, is envisaged to ensure uninterrupted power supply despite the intermittent nature of the renewable sources. A scheduling layer provides a detailed plan of the optimum contribution of the various available power sources over a one-week (7-day) horizon. A model predictive control (MPC) scheme is deployed at the lower control layer; it receives measurements of the fluctuations or uncertainties in the renewable power sources and maintains smooth operation of the power generation system through appropriate decisions on generation via the reformer-based fuel cell or by exploiting the battery storage, ensuring delay-free delivery of power to the external load. During real-time operation of the plant, due to the uncertainties in the contribution from solar and wind sources, the power demanded from the fuel cell and the battery is varied accordingly by the MPC layer to meet the overall power demand. The performance of the designed MPC in maintaining smooth delivery of power, in both the absence and presence of uncertainties in the renewable energy sources, and with and without reactive feedback between the scheduling and control layers, is illustrated using case studies.
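The MPC formulation is not given in the abstract; the rule-based single-step dispatcher below (hypothetical names and limits, a greedy stand-in rather than a true receding-horizon optimizer) captures the decision the lower control layer makes at each step:

```python
def dispatch_step(demand, renewable, soc, soc_max, fc_max, batt_rate):
    """One control step: use renewable power first, then the fuel cell,
    then the battery; surplus renewable charges the battery.
    Negative 'batt' means the battery is charging."""
    net = demand - renewable
    if net <= 0:                               # renewable surplus
        charge = min(-net, batt_rate, soc_max - soc)
        return {"fc": 0.0, "batt": -charge, "soc": soc + charge}
    fc = min(net, fc_max)                      # fuel cell covers the base shortfall
    discharge = min(net - fc, batt_rate, soc)  # battery covers the remainder
    return {"fc": fc, "batt": discharge, "soc": soc - discharge}
```

A real MPC would optimize these decisions jointly over a prediction horizon instead of greedily per step; this sketch only shows the feasibility constraints involved.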

9.
Fully exploiting the potential flexibility resources in a power system can effectively improve its capacity to absorb renewable energy. Carbon capture power plants can reduce the carbon emissions of the power system while also providing potential flexible regulation capability. This paper establishes a flexible regulation model of demand response and carbon capture power plants, proposes a multi-time-scale dispatch method that accounts for the flexible operation of carbon capture plants and demand response, and applies it to power system dispatch to minimize total operating cost. In the day-ahead dispatch stage, demand response and the flexible regulation of carbon capture plants are used to time-shift load demand, targeting low-carbon, economic operation; in the intraday stage, a model predictive control scheme corrects the day-ahead schedule to maintain real-time power balance. Results show that demand response and the flexible operation of carbon capture plants increase the system's renewable energy absorption while reducing total cost by 18.7% and 1.4%, respectively; coordinating carbon capture plants with demand response reduces total system cost by 20.1%, a significant improvement.

10.
Designing eco-friendly systems has been at the forefront of computing research. Faced with growing concern about server energy expenditure and climate change, both industry and academia have started to show strong interest in computing systems powered by renewable energy sources. Existing proposals on this issue mainly focus on optimizing resource utilization or workload performance. The key supporting hardware structures for cross-layer power management and emergency handling mechanisms are often left unexplored. This paper presents GreenPod, a research framework for exploring scalable and dependable renewable power management in datacenters. An important feature of GreenPod is that it enables joint management of server power supplies and virtualized server workloads. Its interactive communication portal between servers and power supplies allows datacenter operators to perform real-time renewable energy driven load migration and power emergency handling. Based on our system prototype, we discuss an important topic: virtual machine (VM) workload survival when facing an extended utility outage and an insufficient onsite renewable power budget. We show that whether a VM can survive depends on the operating frequencies and workload characteristics. The proposed framework can greatly encourage and facilitate innovative research in dependable green computing.

11.
Every time an Internet user downloads a video, shares a picture, or sends an email, his/her device addresses a data center, and often several of them. These complex systems feed the web and all Internet applications with their computing power and information storage, but they are very energy hungry. The energy consumed by Information and Communication Technology (ICT) infrastructures is currently more than 4% of worldwide consumption and is expected to double in the next few years. Data centers and communication networks are responsible for a large portion of ICT energy consumption, which in recent years has stimulated a research effort to reduce or mitigate their environmental impact. Most of the proposed approaches tackle the problem by separately optimizing the power consumption of the servers in data centers and of the network. However, the Cloud computing infrastructure of most providers, which includes traditional telcos that are extending their offer, is rapidly evolving toward geographically distributed data centers strongly integrated with the network interconnecting them. Distributed data centers not only bring services closer to users with better quality, but also provide opportunities to improve energy efficiency by exploiting the variation of prices in different time zones, locally generated green energy, and the storage systems that are becoming popular in energy networks. In this paper, we propose an energy-aware joint management framework for geo-distributed data centers and their interconnection network. The model is based on virtual machine migration and formulated using mixed integer linear programming. It can be solved using state-of-the-art solvers such as CPLEX in reasonable time. The proposed approach covers various aspects of Cloud computing systems and jointly manages the use of green and brown energy using energy storage technologies. The obtained results show that significant energy cost savings can be achieved compared to a baseline strategy in which data centers do not collaborate to reduce energy and do not use power coming from renewable resources.
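The MILP itself is solved with CPLEX in the paper; for intuition only, this brute-force miniature (hypothetical data shapes; a flat per-migration cost stands in for the network model) evaluates every VM-to-data-center assignment:

```python
from itertools import product

def best_placement(vms, dc_prices, link_cost):
    """Exhaustive search over VM placements, a tiny stand-in for the MILP
    (feasible only for toy instances). vms: list of (name, load, home_dc);
    dc_prices: {dc_name: energy price per unit load}; link_cost: flat cost
    charged whenever a VM is migrated away from its home data center."""
    names = list(dc_prices)
    best, best_cost = None, float("inf")
    for combo in product(names, repeat=len(vms)):
        cost = 0.0
        for (name, load, home), dc in zip(vms, combo):
            cost += dc_prices[dc] * load        # energy cost at the chosen site
            if dc != home:
                cost += link_cost               # migration over the network
        if cost < best_cost:
            best, best_cost = combo, cost
    return best, best_cost
```

Price differences across time zones only justify a migration when they outweigh the network cost, which is the trade-off the paper's joint formulation optimizes at scale.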

12.
According to Turkey's energy supply policy, the main target for electricity generation is to increase the share of renewable-energy usage to above 30% by the year 2023. However, the steps taken to achieve this goal are being realized without long-term generation planning. The aim of this study is to examine the situation of renewable-energy sources within Turkey's generation planning and determine their contribution to overall generation, together with the electrical and economic consequences. In the study, generation expansion planning (GEP) covering the 2012–2027 planning horizon was optimized using a genetic algorithm (GA). The share of renewable sources to be installed across all source types was determined as 8.08%. Furthermore, the effects of the reserve capacity level and the rate of constraints on sources were investigated. The share of these sources can be increased through incentives such as tax deductions, mandatory quotas, or carbon emission trading.
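The study's GA encoding and objective are not reproduced in the abstract; the minimal real-coded GA below (toy objective and hypothetical operator choices, not the study's actual setup) shows the search-loop structure such planning work relies on:

```python
import random

def run_ga(fitness, n_genes, pop_size=20, gens=50, seed=1):
    """Minimal real-coded genetic algorithm: elitism, uniform crossover
    among the fitter half, and clamped gaussian mutation. Genes live
    in [0, 1] (e.g., normalized capacity shares)."""
    rnd = random.Random(seed)
    pop = [[rnd.random() for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=fitness, reverse=True)
        pop = scored[:2]                           # elitism: keep the two best
        while len(pop) < pop_size:
            a, b = rnd.sample(scored[:pop_size // 2], 2)
            child = [x if rnd.random() < 0.5 else y for x, y in zip(a, b)]
            if rnd.random() < 0.3:                 # gaussian mutation, clamped
                i = rnd.randrange(n_genes)
                child[i] = min(1.0, max(0.0, child[i] + rnd.gauss(0, 0.1)))
            pop.append(child)
    return max(pop, key=fitness)

# Toy objective standing in for the GEP cost model: prefer a ~8% share in gene 0.
best = run_ga(lambda g: -abs(g[0] - 0.08), n_genes=3)
```

In the real study the fitness would embed generation costs, demand, and reserve-capacity constraints rather than this one-line surrogate.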

13.
Embedded systems provide limited storage capacity. This limitation conflicts with the demands of modern virtual machine platforms, which require large amounts of library code to be present on each client device. These conflicting requirements are often resolved by providing specialized embedded versions of the standard libraries, but even these stripped-down libraries consume significant resources. We present a solution for "always connected" mobile devices based on a zero-footprint client paradigm. In our approach, all code resides on a remote server. Only those parts of applications and libraries that are likely to be needed are transferred to the mobile client device. Since it is difficult to predict statically which library parts will be needed at run time, we combine static analysis, opportunistic off-target linking, and lazy code loading to transfer code with a high likelihood of execution ahead of time, while other code, such as exception code, remains on the server and is transferred only on demand. This allows us to perform not only dead code elimination, but also aggressive elimination of unused code. The granularity of our approach is flexible, from class files all the way down to individual basic blocks. Our method achieves total code size reductions of up to 95%.
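As a hedged illustration of the static-analysis half only (the off-target linking and lazy loading are beyond a sketch), a reachability pass over a call graph separates code worth shipping ahead of time from code that can stay on the server. The graph and method names are invented:

```python
def reachable(call_graph, roots):
    """Depth-first reachability: every method reachable from the entry
    points is live; everything else is dead code."""
    seen, stack = set(), list(roots)
    while stack:
        m = stack.pop()
        if m in seen:
            continue
        seen.add(m)
        stack.extend(call_graph.get(m, ()))
    return seen

graph = {
    "main": ["parse", "render"],
    "parse": ["log"],
    "render": [],
    "log": [],
    "unusedHelper": ["log"],   # never called from main: dead
}
live = reachable(graph, ["main"])
dead = set(graph) - live
```

In the paper's setting the live set would additionally be ranked by execution likelihood, so that rarely executed code (such as exception handlers) is kept server-side even though it is reachable.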

14.

In recent years, data centers (DCs) have evolved considerably, driven by the advent of cloud computing, e-commerce, social network services, and big data. Such architectures demand high availability, reliability, and performance at satisfactory service levels; these requirements are often neglected because of their high cost. In addition, techniques capable of promoting greater environmental sustainability are most often forgotten in the design phase of such architectures. Integrated assessment of dependability attributes for DCs is, in general, not trivial. Thus, this work presents the dependability (availability and reliability), performability, and sustainability parameters that need special attention when implementing a cooling subsystem in DCs, one of the largest cost generators for these infrastructures. In this study, we use the hypothetico-deductive method through a quantitative and qualitative approach; the procedure is bibliographical research through the review of scientific studies, and the research objectives are exploratory in nature. The results show that among all the papers selected and analyzed in this systematic literature review (SLR), none have jointly addressed performability, dependability, and sustainability in cooling systems for DCs. The main results of this work are presented through research questions, as they bring evidence of gaps to be addressed in the area.
The four research questions point out challenges in implementing cooling systems in DCs; present the techniques and/or methods most used to propose or analyze data center cooling infrastructures; address the essential sustainability requirements for cooling subsystems; and, finally, present open questions that can be investigated in the area of sustainable cooling in DCs, concerning data center cooling and the difficulty of incorporating dependability attributes in the environmental context. In addition to these results, the present study contributes to the concept of a "green data center" for companies, which ranges from the choice of renewable energy sources to more efficient information technology equipment. Hence, we show the relevance and originality of this SLR and its results.


15.
Given their vast non-fossil potential and significant expertise, there is a question of whether Asian nations are attaining efficient consumption and exploitation of renewable resources. From this perspective, the paper evaluates the efficiency of 14 Asian countries in renewable energy consumption over a six-year period (2014–2019). In analyzing the performance of the renewable energy sector, data envelopment analysis (DEA) with an undesirable-output model has been widely utilized to measure the efficiency of peer units against the best-practice frontier. We consider four inputs and two outputs in a DEA-based efficiency model. Labor force, total energy consumption, share of renewable energy, and total renewable energy capacity are the inputs. The outputs consist of CO2 emissions as an undesirable output and gross domestic product as a desirable output. The results show that the United Arab Emirates, Saudi Arabia, Japan, and South Korea consistently outperform in the evaluation, achieving perfect efficiency scores during the research period. Uzbekistan is found to have the lowest average efficiency of renewable energy utilization.
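The DEA model in the paper is solved as a linear program; as a deliberately crude stand-in (a grid search over weights instead of an LP solver, toy data shapes, and strictly positive inputs assumed), the CCR ratio idea behind DEA can be shown as:

```python
from itertools import product

def ccr_efficiency(inputs, outputs, k, steps=10):
    """Grid-search approximation of the CCR ratio model: find input/output
    weights maximizing decision-making unit k's weighted-output to
    weighted-input ratio, subject to no unit's ratio exceeding 1.
    inputs[i], outputs[i] are the vectors of unit i; inputs must be > 0."""
    m, s = len(inputs[0]), len(outputs[0])
    grid = [i / steps for i in range(1, steps + 1)]   # exclude zero weights
    best = 0.0
    for v in product(grid, repeat=m):                 # input weights
        for u in product(grid, repeat=s):             # output weights
            ratios = [sum(ui * y for ui, y in zip(u, ys)) /
                      sum(vi * x for vi, x in zip(v, xs))
                      for xs, ys in zip(inputs, outputs)]
            if max(ratios) <= 1.0:                    # frontier constraint
                best = max(best, ratios[k])
    return best
```

A unit scoring 1.0 lies on the best-practice frontier (as the abstract reports for the UAE, Saudi Arabia, Japan, and South Korea); real DEA studies solve the equivalent LP exactly and handle undesirable outputs such as CO2 with dedicated formulations.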

16.
Cloud computing aims to provide dynamic leasing of server capabilities as scalable virtualized services to end users. However, data centers hosting cloud applications consume vast amounts of electrical energy, contributing to high operational costs and carbon footprints. Green cloud computing solutions that can not only minimize operational costs but also reduce the environmental impact are therefore necessary. This study focuses on the Infrastructure as a Service model, where custom virtual machines (VMs) are launched on appropriate servers available in a data center. A complete data center resource management scheme is presented in this paper. The scheme can not only ensure user quality of service (through service level agreements) but also achieve maximum energy saving and green computing goals. Considering that a data center usually contains tens of thousands of hosts, making it difficult to solve the resource allocation problem with an exact algorithm, a modified shuffled frog leaping algorithm and improved extremal optimization are employed in this study to solve the dynamic VM allocation problem. Experimental results demonstrate that the proposed resource management scheme exhibits excellent performance in green cloud computing.
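The metaheuristics themselves (shuffled frog leaping, extremal optimization) are too involved for a sketch; as a simple baseline for the consolidation problem they solve, best-fit-decreasing packing places VMs on as few hosts as possible so that idle hosts can be powered down (normalized CPU demands are an illustrative assumption):

```python
def place_vms(vm_demands, host_capacity):
    """Best-fit decreasing: place each VM (largest first) on the active
    host with the least remaining capacity that still fits it; open a
    new host only when nothing fits. Returns the number of active hosts."""
    hosts = []  # remaining capacity of each active host
    for d in sorted(vm_demands, reverse=True):
        fits = [i for i, free in enumerate(hosts) if free >= d]
        if fits:
            i = min(fits, key=lambda i: hosts[i])  # tightest feasible fit
            hosts[i] -= d
        else:
            hosts.append(host_capacity - d)
    return len(hosts)
```

Metaheuristic allocators improve on such heuristics by also weighing SLA constraints and migration costs, which plain bin packing ignores.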

17.
Internet Data Centers (IDCs) are among the most important emerging cyber-physical systems. To guarantee quality of service for their worldwide users, large Internet service providers usually operate multiple geographically distributed IDCs. The enormous power consumption of these data centers may lead to both huge electricity bills and considerable carbon emissions. To mitigate these problems, on-site renewable energy plants have emerged in recent years. Since renewable energy is intermittent, greening geographical load balancing (GGLB for short) has been proposed to reduce both electricity bills and carbon emissions by following the renewables. However, GGLB cannot deal well with wildly fluctuating wind power when applied to IDCs with on-site wind energy plants: it may either fail to minimize the total electricity bills or incur costly frequent on-off switching of servers. To minimize the total electricity bills of geographically distributed IDCs with on-site wind energy plants, we formulate the total electricity bill minimization problem and propose a novel two-time-scale load balancing framework, TLB, to solve it. First, TLB models the runtime cooling efficiency of each IDC. Then it predicts the future fine-grained (e.g., 10-minute) on-site wind power output at the beginning of each scheduling period (e.g., an hour). After that, TLB transforms the primal optimization problem into a typical mixed-integer linear programming problem and solves it to obtain the optimal scheduling policy, including the number of open servers as well as the request routing policy. It is worth noting that the number of open servers in each IDC remains the same during each scheduling period. As an application instance of TLB, we also design a two-time-scale load balancing algorithm, TLB-ARMA, for our experimental scenario. Evaluation results based on real-life traces show that TLB-ARMA is able to reduce the total electricity bills by as much as 12.58% compared with the hourly executed GGLB, without incurring the costly repeated on-off switching of servers.

18.
Cloud data centers consume large volumes of energy for processing and for switching servers among different modes. Virtual Machine (VM) migration enhances the performance of cloud servers in terms of energy efficiency, internal failures, and availability. On the other hand, energy utilization can be minimized by decreasing the number of active, underutilized resources, which conversely reduces the dependability of the system. In the VM migration process, VMs are migrated from underutilized physical resources to other resources to minimize energy utilization and optimize operations. In this view, the current study develops an Improved Metaheuristic Based Failure Prediction with Virtual Machine Migration Optimization (IMFP-VMMO) model for cloud environments. The major intention of the proposed IMFP-VMMO model is to reduce energy utilization while maximizing performance in terms of failure prediction. To accomplish this, the IMFP-VMMO model employs a Gradient Boosting Decision Tree (GBDT) classification model at the initial stage for effective prediction of VM failures. At the same time, VMs are optimally migrated using the Quasi-Oppositional Artificial Fish Swarm Algorithm (QO-AFSA), which in turn reduces energy consumption. The performance of the proposed IMFP-VMMO technique was validated, and the comparative study confirmed its better performance over recent approaches.

19.
An energy consumption modeling approach for cloud computing data centers
罗亮, 吴文峻, 张飞. 《软件学报》 (Journal of Software), 2014, 25(7): 1371-1387
The computing demand of cloud computing has driven the rapid growth of large-scale data centers, which in turn consume enormous amounts of energy. Owing to the elasticity and scalability of cloud services, the hardware scale of cloud data centers has expanded dramatically in recent years, turning what used to be a dispersed energy problem into a concentrated one; in-depth research on energy saving in cloud data centers is therefore of great significance. To this end, this paper proposes a high-accuracy energy model to predict the power consumption of a single server in a cloud data center. An accurate energy model is the foundation of many energy-aware resource scheduling methods. Most existing studies of cloud energy consumption use linear models to describe the relationship between energy consumption and resource utilization; however, as server architectures in cloud data centers evolve, this relationship can no longer be captured by a simple linear function. Starting from processor performance counters and system utilization, and combining multivariate linear regression with nonlinear regression, the paper analyzes how different parameters and methods affect server energy modeling and proposes a server energy model suited to cloud data center infrastructure. Experimental results show that, while monitoring only system utilization, the model achieves a prediction accuracy above 95% once the system stabilizes.
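The paper's model combines multiple performance counters; as a hedged single-feature sketch (CPU utilization only, with a quadratic form chosen purely for illustration of the nonlinear fit), the regression can be done with plain normal equations:

```python
def fit_power_model(util, power):
    """Least-squares fit of P(u) = b0 + b1*u + b2*u^2 via the normal
    equations, solved with Gaussian elimination and partial pivoting."""
    X = [[1.0, u, u * u] for u in util]
    A = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]  # X^T X
    b = [sum(r[i] * p for r, p in zip(X, power)) for i in range(3)]          # X^T y
    for i in range(3):
        piv = max(range(i, 3), key=lambda r: abs(A[r][i]))   # pivot row
        A[i], A[piv] = A[piv], A[i]
        b[i], b[piv] = b[piv], b[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * c for a, c in zip(A[r], A[i])]
            b[r] -= f * b[i]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                                      # back-substitution
        s = sum(A[i][j] * coef[j] for j in range(i + 1, 3))
        coef[i] = (b[i] - s) / A[i][i]
    return coef

# Synthetic server readings: idle power 100 W, linear and quadratic terms.
util = [0.0, 0.25, 0.5, 0.75, 1.0]
power = [100 + 50 * u + 30 * u * u for u in util]
b0, b1, b2 = fit_power_model(util, power)
```

With real measurements one would add further features (performance counters, memory and I/O activity) as extra columns, exactly as the paper's multivariate models do.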

20.
The pervasive availability of increasingly powerful mobile computing devices such as PDAs, smartphones, and wearable sensors is widening their use in complex applications such as collaborative analysis, information sharing, and data mining in a mobile context. Energy characterization plays a critical role in determining the requirements of data-intensive applications that can be efficiently executed on mobile devices. This paper presents an experimental study of the energy consumption behavior of representative data mining algorithms running on mobile devices. Our study reveals that, although data mining algorithms are compute- and memory-intensive, with appropriate tuning of a few parameters associated with the data (e.g., data set size, number of attributes, size of produced results) those algorithms can be efficiently executed on mobile devices, saving energy and thus prolonging device lifetime. Based on the outcome of this study, we also propose a machine learning approach to predict the energy consumption of mobile data-intensive algorithms. Results show that considerable accuracy is achieved when the predictor is trained with algorithm-specific features.
