Similar Literature
20 similar documents found.
1.
The energy consumption of cloud data centers has become a major industry concern. Most existing work seeks to reduce data center energy consumption through technical means, or to find an optimal trade-off between energy and performance; since cloud computing is a commercial computing model, however, few studies have considered the influence of cloud pricing strategies on energy management mechanisms. This paper proposes a data center energy-cost optimization scheme based on a dynamic pricing strategy. A unified model of service price and energy cost is established, and by studying the relationship between the two, price and energy cost are optimized jointly so that data center revenue is maximized. Given the large scale of data centers and their heavy workloads, the data center is modeled as a large-scale queueing system under a heavy-traffic approximation; based on differences in service demand and electricity prices across data centers, a load-routing mechanism among multiple data centers is designed to cut the overall energy cost. Within a single data center, a dual-threshold policy is defined to dynamically adjust server states (On/Off/Idle, etc.), further optimizing the energy cost. Experimental results show that the proposed solution effectively optimizes data center energy cost while meeting user QoS requirements and maximizing data center revenue.
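A minimal sketch of a dual-threshold server-state controller of the kind described in this abstract, assuming a simple backlog signal; the thresholds T_LOW/T_HIGH, the state names and the cluster data are illustrative assumptions, not the paper's actual parameters:

```python
# Hypothetical dual-threshold controller: adjusts server states (On/Off/Idle)
# based on the current request backlog. Thresholds are illustrative only.
T_LOW = 20    # below this backlog per active server, start parking servers
T_HIGH = 80   # above this backlog per active server, wake servers up

def adjust_server_states(servers, backlog):
    """servers: list of dicts with a 'state' key in {'on', 'idle', 'off'}."""
    active = [s for s in servers if s["state"] == "on"]
    load_per_server = backlog / max(len(active), 1)

    if load_per_server > T_HIGH:
        # Demand is high: bring an idle server online first (cheap), else boot one.
        for s in servers:
            if s["state"] == "idle":
                s["state"] = "on"
                return
        for s in servers:
            if s["state"] == "off":
                s["state"] = "on"   # in practice this incurs a boot delay
                return
    elif load_per_server < T_LOW and len(active) > 1:
        # Demand is low: park one active server as idle; a separate timer
        # could later move long-idle servers to 'off'.
        active[-1]["state"] = "idle"

# Example: 4 servers facing a backlog of 350 pending requests.
cluster = [{"state": "on"}, {"state": "on"}, {"state": "idle"}, {"state": "off"}]
adjust_server_states(cluster, backlog=350)
print([s["state"] for s in cluster])
```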

2.
As cloud computing technology matures, its range of applications keeps widening. The desktop cloud is a typical cloud application: by deploying a data center and equipping users with thin clients, it changes how traditional office environments are provisioned and improves management efficiency and cost-effectiveness. This paper briefly describes a management system designed for desktop applications in the cloud, which centralizes the management of user desktops in order to improve productivity and reduce costs.

3.
An Energy Consumption Modeling Method for Cloud Computing Data Centers (cited 1 time: 0 self-citations, 1 by others)
罗亮  吴文峻  张飞 《软件学报》2014,25(7):1371-1387
Cloud computing's demand for computing capacity has driven the rapid growth of large-scale data centers, which in turn consume enormous amounts of energy. Because of the elasticity and scalability of cloud services, the hardware scale of cloud data centers has expanded dramatically in recent years, turning what used to be a dispersed energy problem into a concentrated one, so in-depth research on energy saving in cloud data centers is of great significance. To this end, this paper proposes a high-accuracy energy model for predicting the power consumption of an individual server in a cloud data center. An accurate power model is the foundation of many energy-aware resource scheduling methods; most existing studies of cloud energy consumption use a linear model to describe the relationship between power and resource utilization. As server architectures in cloud data centers evolve, however, this relationship can no longer be captured well by a simple linear function. Starting from processor performance counters and system utilization, and combining multiple linear regression with nonlinear regression, the paper analyzes how different parameters and methods affect server power modeling and proposes a server power model suited to cloud data center infrastructure. Experimental results show that, when only system utilization is monitored, the model achieves a prediction accuracy above 95% once the system has stabilized.
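A minimal sketch of the kind of regression-based power modeling this abstract describes: fitting a linear and a nonlinear (cubic) model of power versus CPU utilization. The utilization/power samples below are synthetic and purely illustrative, not measurements from the paper:

```python
# Fit a linear and a cubic power model P(u) from CPU-utilization samples.
import numpy as np

u = np.array([0.0, 0.1, 0.25, 0.4, 0.55, 0.7, 0.85, 1.0])              # CPU utilization
p = np.array([95.0, 118.0, 142.0, 160.0, 173.0, 184.0, 196.0, 210.0])  # measured watts (synthetic)

# Linear model: P = a*u + b (the common baseline in earlier studies)
a, b = np.polyfit(u, p, deg=1)

# Nonlinear model: cubic polynomial in u (one simple nonlinear alternative)
coeffs = np.polyfit(u, p, deg=3)

def predict_linear(util):
    return a * util + b

def predict_cubic(util):
    return np.polyval(coeffs, util)

for util in (0.3, 0.6, 0.9):
    print(f"u={util:.1f}  linear={predict_linear(util):.1f} W  cubic={predict_cubic(util):.1f} W")
```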

4.
Energy Management for Virtualized Cloud Computing Platforms (cited 15 times: 0 self-citations, 15 by others)
The high energy consumption of data centers is a problem demanding urgent attention. Virtualization technology and the cloud computing model have developed rapidly in recent years and, thanks to their high resource utilization, flexible management and good scalability, will be widely adopted in future data centers. Combining traditional energy management techniques with virtualization offers a new approach to energy management in cloud data centers and is an important research direction. This paper surveys recent research on energy management in virtualized cloud platforms from four aspects: energy measurement, energy modeling, energy management mechanisms, and energy management optimization algorithms. It analyzes the operational management and energy management challenges facing virtualized cloud platforms and points out the difficulties of energy monitoring and measurement; introduces energy monitoring procedures and energy-profile analysis methods; proposes an overall energy model for virtual machine systems as well as energy models for the two key techniques of server consolidation and live migration; summarizes progress on energy management mechanisms at the virtualization layer and the cloud platform layer; and classifies and compares energy management algorithms. The paper concludes with ten directions worth further research.

5.
Data Storage in the Cloud Computing Environment (cited 3 times: 0 self-citations, 3 by others)
In recent years more and more individuals and enterprises have turned their attention to cloud computing as a new computing model, and high-performance cloud storage is a basic requirement for delivering cloud services. This paper introduces cloud computing and cloud storage, discusses the data storage architecture of the cloud environment, and examines the design of its distributed file system in detail, providing enterprises with an available, scalable, manageable and secure design for building their own cloud-based data centers. Finally, it briefly analyzes several typical commercial cloud storage platforms, discusses the development trends of cloud computing, and offers different development strategies for enterprises according to the roles they play in that development.

6.
In recent years the operations and maintenance cost of keeping a data center running has been rising sharply, with medium-sized data centers hit especially hard. One effective way to save cost is to replace the operation and management of a traditional data center with cloud computing and cloud storage: small and medium-sized enterprises can purchase computing capacity and storage services as easily as paying a utility bill. This article describes cloud storage in more detail and analyzes its security.

7.
1. The impact of cloud computing on network bandwidth and energy consumption; the growth of cloud data centers. According to forecasts by Cisco, global cloud traffic will grow at a compound annual growth rate of 66%; within two years the workload processed in cloud data centers will for the first time exceed that processed in traditional data centers, reaching a 51% share, and the share is expected to reach 57% by 2015.

8.
Campus networks suffer from high energy consumption, low efficiency and substantial waste, problems that cloud computing can address, yet there is no good yardstick for the resulting efficiency gains. This paper briefly describes a campus cloud platform, analyzes data center costs and per-instance user costs, and proposes a cost-utility function for the campus cloud platform. A Markov chain model of the platform is built to analyze a load-balancing strategy and a greedy strategy, and the platform is simulated with CloudSim to validate and compare the model. The modeling and simulation show that, by building a cloud platform and choosing appropriate strategies, a campus network can effectively improve resource utilization to reduce costs or to provide better quality of service.

9.
As China's economy and society continue to develop and its level of science and technology keeps rising, the continuing spread of network information technology has made the value of data resources increasingly prominent. In the big data era, the cloud-oriented data center has emerged, changing the energy accounting process of traditional data centers and improving the efficiency of energy consumption calculation. This article discusses an energy consumption modeling method for cloud data centers, in the hope of providing a reference for interested readers.

10.
As an emerging field of the IT industry, cloud computing has shown great vitality in a short time, and more and more companies have built their own cloud computing centers or data centers. With data and computing power becoming centralized, host security in the cloud is attracting growing attention. This article first briefly introduces the concept and service models of cloud computing, then, based on the characteristics of the cloud environment, discusses host security issues and related technologies, and finally discusses technical solutions for achieving host security on a cloud platform.

11.
Cloud computing services have recently become a ubiquitous service delivery model, covering a wide range of applications from personal file sharing to enterprise data warehousing. Building green data center networks providing cloud computing services is an emerging trend in the Information and Communication Technology (ICT) industry, because of global warming and the potential GHG emissions resulting from cloud services. As one of the first worldwide initiatives provisioning ICT services entirely based on renewable energy such as solar, wind and hydroelectricity across Canada and around the world, the GreenStar Network (GSN) was developed to dynamically transport user services to be processed in data centers built in proximity to green energy sources, reducing Greenhouse Gas (GHG) emissions of ICT equipment. Under the current approach, which focuses mainly on reducing energy consumption at the micro-level through energy-efficiency improvements, overall energy consumption will eventually increase due to the growing demand from new services and users, resulting in an increase in GHG emissions. Based on the cooperation between Mantychore FP7 and the GSN, our approach is, therefore, much broader and more appropriate because it focuses on GHG emission reductions at the macro-level. This article presents some outcomes of our implementation of such a network model, which spans multiple green nodes in Canada, Europe and the USA. The network provides cloud computing services based on dynamic provision of network slices through relocation of virtual data centers.

12.
Energy consumption in cloud data centers is increasing as the use of such services increases, so new methods of decreasing energy consumption are needed. Green cloud computing helps to reduce energy consumption and significantly decreases both operating costs and greenhouse gas emissions. Scheduling the enormous number of user-submitted workflow tasks is an important aspect of cloud computing, and resources in cloud data centers should compute these tasks using energy-efficient techniques. This paper proposes a new energy-aware scheduling algorithm for time-constrained workflow tasks using the DVFS method, in which the host reduces the operating frequency using different voltage levels. The goal of this research is to reduce energy consumption and SLA violations and improve resource utilization. The simulation results show that the proposed method performs more efficiently when evaluating metrics such as energy utilization, average execution time, average resource utilization and average SLA violation.
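A minimal illustration of the DVFS idea this abstract relies on: pick the lowest processor frequency (and hence power level) that still finishes a task within its deadline. The frequency/power table and the task parameters are hypothetical, not values from the paper:

```python
# Hypothetical DVFS selection: choose the lowest frequency level that still
# meets a task's deadline. Frequency/power pairs are illustrative only.
FREQ_LEVELS = [            # (frequency in GHz, approximate active power in watts)
    (1.0, 35.0),
    (1.6, 55.0),
    (2.2, 80.0),
    (2.8, 110.0),
]

def pick_frequency(task_cycles, deadline_s):
    """Return the lowest-power (freq, power) level that meets the deadline."""
    for freq, power in FREQ_LEVELS:              # levels sorted by power ascending
        exec_time = task_cycles / (freq * 1e9)   # seconds at this frequency
        if exec_time <= deadline_s:
            return freq, power, exec_time
    raise ValueError("deadline cannot be met even at the highest frequency")

freq, power, t = pick_frequency(task_cycles=4.2e9, deadline_s=3.0)
print(f"run at {freq} GHz (~{power} W), finishing in {t:.2f} s")
```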

13.
Cloud computing is a form of distributed computing which promises to deliver reliable services through next-generation data centers built on virtualized compute and storage technologies. It is becoming truly ubiquitous, and with cloud infrastructures becoming essential components for providing Internet services, there is an increase in energy-hungry data centers deployed by cloud providers. As cloud providers often rely on large data centers to offer the resources required by users, the energy consumed by cloud infrastructures has become a key environmental and economic concern. Much energy is wasted in these data centers because of under-utilized resources, thereby contributing to global warming. To conserve energy, these under-utilized resources need to be used efficiently, and to achieve this, jobs need to be allocated to cloud resources in such a way that resources are used efficiently, with a gain in performance and energy efficiency. In this paper, a model for an energy-aware resource utilization technique is proposed to efficiently manage cloud resources and enhance their utilization. It further helps in reducing the energy consumption of clouds by using server consolidation through virtualization without degrading the performance of users' applications. An artificial bee colony based energy-aware resource utilization technique corresponding to the model has been designed to allocate jobs to the resources in a cloud environment. The performance of the proposed algorithm has been evaluated against existing algorithms through the CloudSim toolkit. The experimental results demonstrate that the proposed technique outperforms the existing techniques by minimizing energy consumption and execution time of applications submitted to the cloud. Copyright © 2014 John Wiley & Sons, Ltd.
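A heavily simplified sketch of an artificial bee colony (ABC) search for the job-to-host assignment problem named in this abstract; the energy model, neighborhood move and all parameters are illustrative assumptions, and the onlooker bees' fitness-proportional selection is omitted for brevity, so this is not the paper's actual formulation:

```python
# Simplified artificial-bee-colony (ABC) search for assigning jobs to hosts
# so that total (illustrative) energy is low. All numbers are assumptions.
import random

JOB_LOAD = [0.2, 0.5, 0.3, 0.7, 0.4, 0.6]   # CPU demand of each job
N_HOSTS = 3
P_IDLE, P_MAX = 100.0, 250.0                # assumed per-host power envelope

def energy(assign):
    """Illustrative energy: idle power for every non-empty host plus a load-proportional part."""
    total = 0.0
    for h in range(N_HOSTS):
        load = sum(JOB_LOAD[j] for j, host in enumerate(assign) if host == h)
        if load > 0:
            total += P_IDLE + (P_MAX - P_IDLE) * min(load, 1.0)
    return total

def neighbor(assign):
    """Move one randomly chosen job to a different host (the employed-bee move)."""
    new = list(assign)
    j = random.randrange(len(new))
    new[j] = random.choice([h for h in range(N_HOSTS) if h != new[j]])
    return new

def abc_search(n_sources=6, iterations=200, limit=20):
    sources = [[random.randrange(N_HOSTS) for _ in JOB_LOAD] for _ in range(n_sources)]
    trials = [0] * n_sources
    for _ in range(iterations):
        # Employed-bee phase: try a neighbor of each food source, keep it if better.
        for i in range(n_sources):
            cand = neighbor(sources[i])
            if energy(cand) < energy(sources[i]):
                sources[i], trials[i] = cand, 0
            else:
                trials[i] += 1
        # Scout phase: abandon sources that have not improved for 'limit' trials.
        for i in range(n_sources):
            if trials[i] > limit:
                sources[i] = [random.randrange(N_HOSTS) for _ in JOB_LOAD]
                trials[i] = 0
    return min(sources, key=energy)

best = abc_search()
print("assignment:", best, "energy:", round(energy(best), 1))
```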

14.
Energy efficiency has become a prominent research area in the virtualized cloud computing paradigm: the increase in the number and size of cloud data centers has amplified the need for it. Live virtual machine migration is an extensively practiced technology in cloud computing and is therefore the focus of this work on saving energy. This paper proposes an energy-aware virtual machine migration technique for cloud computing based on the Firefly algorithm. The proposed technique migrates the maximally loaded virtual machine to the least loaded active node while maintaining the performance and energy efficiency of the data centers. The efficacy of the proposed technique is exhibited by comparing it with other techniques using the CloudSim simulator. An improvement of about 44.39% in average energy consumption is attained by reducing migrations by an average of 72.34% and saving 34.36% of hosts, thereby making the data center more energy-aware.
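The core migration rule stated in this abstract — move the most heavily loaded VM to the least loaded active host — can be sketched as below; the firefly-based search itself is omitted, and the host/VM data and capacity check are illustrative assumptions:

```python
# Illustrative selection step: migrate the maximally loaded VM from the most
# loaded host to the least loaded *active* host, if that host can absorb it.
# Capacities and loads are normalized CPU units; the numbers are made up.
hosts = {
    "h1": {"active": True,  "capacity": 1.0, "vms": {"vm1": 0.55, "vm2": 0.35}},
    "h2": {"active": True,  "capacity": 1.0, "vms": {"vm3": 0.20}},
    "h3": {"active": False, "capacity": 1.0, "vms": {}},   # powered down, not a target
}

def host_load(h):
    return sum(h["vms"].values())

def pick_migration(hosts):
    active = {n: h for n, h in hosts.items() if h["active"]}
    src = max(active, key=lambda n: host_load(active[n]))   # most loaded active host
    dst = min(active, key=lambda n: host_load(active[n]))   # least loaded active host
    if src == dst:
        return None
    vm, load = max(hosts[src]["vms"].items(), key=lambda kv: kv[1])  # heaviest VM
    if host_load(hosts[dst]) + load <= hosts[dst]["capacity"]:
        return vm, src, dst
    return None

move = pick_migration(hosts)
if move:
    vm, src, dst = move
    hosts[dst]["vms"][vm] = hosts[src]["vms"].pop(vm)
    print(f"migrated {vm}: {src} -> {dst}")
```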

15.
ARC (Advanced Resource Connector) CE (Computing Element) is the computing-resource management component of the grid middleware developed by NorduGrid. Compared with peer grid computing-element components, ARC CE is lightweight, easy to manage, simple to authenticate against and easy to extend, so it is used increasingly widely on all kinds of grid application platforms. The high energy physics experiment ATLAS, characterized by large data volumes and heavy computation, uses grid computing to combine about 150,000 CPU cores located at more than a hundred computing centers worldwide for coordinated computation. With the start of the ATLAS Run 2 phase, its required computing resources grew rapidly, so ATLAS has been actively exploring cloud computing resources, supercomputing (HPC) resources, volunteer computing resources and so on. How to integrate these heterogeneous computing resources into ATLAS's traditional grid computing platform has become an important and pressing problem, and ARC CE, thanks to its extensibility and lightweight authentication, has become the core component of the solution. Taking volunteer computing as an example, this paper describes how ARC CE can be used to integrate a dynamic, untrusted volunteer computing platform into the existing ATLAS grid computing environment, with full transparency to both users and grid services.

16.
As IT infrastructures shift to cloud computing, demand for cloud data centers has increased. Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to shared computing resources that can be rapidly provisioned and released with minimal management effort, and economic interest in data centers that provide cloud computing services is growing accordingly. This study analyzes the factors that improve power efficiency while preserving the scalability of data centers and presents considerations for cloud data center construction in terms of power distribution method, power density per rack, and unit of expansion. The results may be used to make rational decisions concerning power input, voltage transformation and the unit of expansion when constructing a cloud data center or migrating an existing data center to a cloud data center.

17.
Cloud data centers consume enormous amounts of electricity, yet the vast majority achieve only low resource utilization, typically 15%-20%, with a considerable number of servers sitting idle and a great deal of energy wasted. To reduce the energy consumption of cloud data centers effectively, this paper proposes an energy-saving virtual machine scheduling algorithm for heterogeneous cluster systems (the PVMAP algorithm). Simulation results show that, compared with the classic PABFD algorithm, PVMAP consumes markedly less energy and offers better scalability and stability; moreover, as the number of 〈Hosts, VMs〉 grows, the total number of VM migrations and of shut-down hosts grows more slowly under PVMAP than under PABFD.
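For context, the PABFD baseline this abstract compares against (power-aware best fit decreasing, as commonly described in the VM-placement literature) can be sketched roughly as follows; the linear power model, per-host power envelopes and host/VM data are illustrative assumptions, not the paper's:

```python
# Rough sketch of power-aware best-fit-decreasing (PABFD-style) placement:
# VMs are sorted by CPU demand (decreasing) and each is placed on the host
# whose estimated power increases the least. All numbers are assumptions.
def host_power(h, util):
    """Simple linear power model in CPU utilization (0..1), per-host envelope."""
    return h["p_idle"] + (h["p_max"] - h["p_idle"]) * util

def pabfd(vms, hosts):
    """vms: {name: cpu_demand}; hosts: {name: {'capacity', 'used', 'p_idle', 'p_max'}}."""
    placement = {}
    for vm, demand in sorted(vms.items(), key=lambda kv: kv[1], reverse=True):
        best_host, best_delta = None, float("inf")
        for name, h in hosts.items():
            if h["used"] + demand > h["capacity"]:
                continue                      # host cannot accommodate this VM
            delta = (host_power(h, (h["used"] + demand) / h["capacity"])
                     - host_power(h, h["used"] / h["capacity"]))
            if delta < best_delta:
                best_host, best_delta = name, delta
        if best_host is None:
            raise RuntimeError(f"no host can fit {vm}")
        hosts[best_host]["used"] += demand
        placement[vm] = best_host
    return placement

hosts = {
    "old_h1": {"capacity": 8.0, "used": 2.0, "p_idle": 120.0, "p_max": 300.0},
    "new_h2": {"capacity": 8.0, "used": 0.0, "p_idle": 90.0,  "p_max": 200.0},
}
vms = {"vm_a": 3.0, "vm_b": 1.5, "vm_c": 2.5}
print(pabfd(vms, hosts))   # the more power-efficient host tends to be filled first
```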

18.
The increasing requirements of big data analytics and complex scientific computing impose significant burdens on cloud data centers. As a result, not only the computation but also the communication expenses in data centers have greatly increased. Previous work on green computing in data centers mainly focused on the energy consumption of the servers rather than the communication. However, for emerging applications that transmit large data flows, more energy may be consumed by communication links, switching and aggregation elements. To this end, based on data flows' transmission characteristics, we propose a novel Job-Aware Virtual Machine Placement and Route Scheduling (JAVPRS) scheme to reduce the energy consumption of data center networks (DCN) while still meeting as many network QoS (Quality of Service) requirements as possible. Our proposed scheme focuses not just on migrating large data flows but also on integrating small data flows to improve the utilization rate of the communication links; with more idle switches turned off, the DCN's energy consumption is thus reduced. Besides data flow migration and integration, Traffic Engineering (TE) techniques are also applied to decrease the transmission delay and increase the network throughput. To evaluate the performance of our proposed scheme, a number of simulation studies are performed. Compared to the selected benchmarks, the simulation results show that JAVPRS can achieve 22.28%–35.72% energy saving while reducing communication delay by 5.8%–6.8% and improving network throughput by 13.3%.
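A toy sketch of the flow-consolidation idea described above: small flows are packed onto as few links as possible (first-fit decreasing), and a switch none of whose links carries traffic can then be put to sleep. The topology, link capacity and flow rates are illustrative assumptions, not the paper's setup:

```python
# Toy flow consolidation: pack flows onto as few links as possible so that
# switches with no busy links can be powered down. All numbers are made up.
LINK_CAPACITY = 10.0          # Gbps per link
links = {                     # link -> switch that owns it
    "l1": "sw1", "l2": "sw1", "l3": "sw2", "l4": "sw2",
}
flows = {"f1": 4.0, "f2": 3.5, "f3": 1.0, "f4": 0.8, "f5": 0.5}   # Gbps

def consolidate(flows, links, capacity):
    usage = {l: 0.0 for l in links}
    assignment = {}
    # First-fit decreasing: place big flows first, preferring already-busy links.
    for f, rate in sorted(flows.items(), key=lambda kv: kv[1], reverse=True):
        busy_first = sorted(usage, key=lambda l: (usage[l] == 0.0, l))
        for l in busy_first:
            if usage[l] + rate <= capacity:
                usage[l] += rate
                assignment[f] = l
                break
    return assignment, usage

assignment, usage = consolidate(flows, links, LINK_CAPACITY)
sleepable = {sw for sw in set(links.values())
             if all(usage[l] == 0.0 for l, s in links.items() if s == sw)}
print(assignment)
print("switches that can sleep:", sleepable)
```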

19.
As cloud computing has become a popular computing paradigm, many companies have begun to build increasing numbers of energy-hungry data centers for hosting cloud computing applications, so energy consumption is becoming a critical issue in cloud data centers. In this paper, we propose a dynamic resource management scheme which takes advantage of both dynamic voltage/frequency scaling and server consolidation to achieve energy efficiency and desired service level agreements in cloud data centers. The novelty of the proposed scheme is to integrate timing analysis, queuing theory, integer programming, and control theory techniques. Our experimental results indicate that, compared to a statically provisioned data center that runs at the maximum processor speed without utilizing the sleep state, the proposed resource management scheme can achieve up to 50.3% energy savings while satisfying response-time-based service level agreements under rapidly changing dynamic workloads.

20.

The demand for higher computing power keeps increasing and, as a result, so does the demand for services hosted in cloud computing environments. It is known, for example, that in 2018 more than 4 billion people accessed these services daily through the Internet, corresponding to more than half of the world's population. Such services are supported by clouds running in large data centers. These systems account for a growing share of electricity consumption, as the growing number of accesses raises the demand for communication capacity, processing power and high availability. Since electricity is not always obtained from renewable resources, the relentless pursuit of cloud services can have a significant environmental impact. In this context, this paper proposes an integrated and dynamic strategy that demonstrates the impact of the availability of data center architecture equipment on energy consumption. To this end, colored Petri net (CPN) modeling is used to quantify the cost, environmental impact and availability of the electricity infrastructure of the data centers under analysis. The proposed models are supported by a tool we developed, so data center designers do not need to know CPN to compute the metrics of interest. A case study shows the applicability of the proposed strategy, with significant results: an increase in system availability of 100% with equivalent operating cost and environmental impact.
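A back-of-the-envelope sketch of the kind of metric the CPN models above quantify: steady-state availability of a power path with redundant UPS units, plus a yearly energy cost, using standard series/parallel availability formulas rather than Petri nets. Component availabilities, load and electricity price are illustrative assumptions, not values from the paper:

```python
# Illustrative availability and energy-cost computation for a data center
# power path (utility feed -> redundant UPS pair -> PDU). Values are assumed.
def series(*avails):
    """Availability of components in series: all must be up."""
    a = 1.0
    for x in avails:
        a *= x
    return a

def parallel(*avails):
    """Availability of redundant components in parallel: at least one must be up."""
    u = 1.0
    for x in avails:
        u *= (1.0 - x)
    return 1.0 - u

A_UTILITY = 0.9997
A_UPS = 0.999
A_PDU = 0.9998

# 2N UPS redundancy: two UPS units in parallel, in series with feed and PDU.
availability = series(A_UTILITY, parallel(A_UPS, A_UPS), A_PDU)
downtime_hours = (1.0 - availability) * 8760.0

POWER_KW = 400.0          # assumed average IT + cooling load
PRICE_PER_KWH = 0.10      # assumed electricity price, USD
yearly_cost = POWER_KW * 8760.0 * PRICE_PER_KWH

print(f"availability = {availability:.6f} ({downtime_hours:.1f} h downtime/year)")
print(f"yearly energy cost ~ ${yearly_cost:,.0f}")
```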

