Similar Literature
A total of 20 related articles were retrieved.
1.
左成  虞红芳 《计算机应用》2016,36(11):2998-3005
To address shortcomings of existing virtual data center (VDC) management platforms, such as hard-coded logic and difficulty of subsequent upgrades, a VDC management platform based on software-defined networking (SDN) was designed and implemented. The platform consists of a VDC management subsystem (VDCM), a VDC computing resource control subsystem (VDCCRC), and a VDC network resource control subsystem (VDCNRC), which interact through RESTful APIs to form a loosely coupled architecture. VDCNRC manages data center network resources through an SDN controller, and VDCCRC manages data center computing resources through an open-source cloud platform. The VDC management subsystem has a built-in VDC management algorithm framework that allows VDC management algorithms suited to real production environments to be developed rapidly. A test environment built with Mininet, OpenStack, and Floodlight verified that the platform can start, migrate, and delete virtual machines through OpenStack, isolate VDC network bandwidth resources through an OpenFlow controller, and support operations such as VDC creation, deletion, and modification.
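As a rough illustration of the loosely coupled REST interaction described above, the Python sketch below queries an SDN controller and a VDC management endpoint. The Floodlight switch-listing URL is a commonly documented Floodlight REST endpoint; the VDCM base URL, the /vdcs path, and the payload fields are hypothetical placeholders, not the platform's actual API.

```python
# Minimal sketch (not the paper's code): how loosely coupled subsystems might
# talk over REST. The VDCM endpoint and payload fields are hypothetical.
import requests

FLOODLIGHT = "http://controller:8080"      # SDN controller (VDCNRC backend)
VDCM_API   = "http://vdcm:9000/api/v1"     # hypothetical VDC management API

def list_switches():
    """Query Floodlight for the OpenFlow switches it manages."""
    resp = requests.get(f"{FLOODLIGHT}/wm/core/controller/switches/json", timeout=5)
    resp.raise_for_status()
    return resp.json()

def create_vdc(name, vm_count, bandwidth_mbps):
    """Ask the (hypothetical) VDC management subsystem to create a VDC."""
    payload = {"name": name, "vms": vm_count, "bandwidth": bandwidth_mbps}
    resp = requests.post(f"{VDCM_API}/vdcs", json=payload, timeout=5)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(list_switches())
    print(create_vdc("tenant-a", vm_count=4, bandwidth_mbps=100))
```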

2.
Fog computing provides quality of service for cloud infrastructure. As data computation intensifies, edge computing becomes difficult. Therefore, mobile fog computing is used for reducing traffic and the time for data computation in the network. In previous studies, software-defined networking (SDN) and network functions virtualization (NFV) were used separately in edge computing. Current industrial and academic research is working to integrate SDN and NFV in different environments to address the challenges in performance, reliability, and scalability. SDN/NFV is still in development. Traditional Internet of Things (IoT) data analysis systems are based only on linear, time-variant models and need an IoT data system with a high-precision model. This paper proposes a combined architecture of SDN and NFV on an edge node server for IoT devices to reduce the computational complexity in cloud-based fog computing. SDN provides a generalized structure of the forwarding plane, which is separated from the control plane. Meanwhile, NFV concentrates on virtualization by combining the forwarding model with virtual network functions (VNFs) as a single VNF or a chain of VNFs, which leads to interoperability and consistency. The orchestrator layer in the proposed software-defined NFV is responsible for handling real-time tasks by using an edge node server through the SDN controller via four actions: task creation, modification, operation, and completion. Our proposed architecture is simulated on the EstiNet simulator, and total time delay, reliability, and satisfaction are used as evaluation parameters. The simulation results are compared with those of existing architectures, such as the software-defined unified virtual monitoring function and ASTP, to analyze the performance of the proposed architecture. The analysis indicates that our proposed architecture achieves better performance in terms of total time delay (1800 s for 200 IoT devices), reliability (90%), and satisfaction (90%).
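The four orchestrator actions named in the abstract can be pictured as a small state machine. This is a hedged illustration only: the class names, the VNF-chain parameters, and the SDN-controller methods (install_path, update_path, remove_path) are assumptions, not the paper's implementation.

```python
# Minimal sketch (assumption, not the paper's code): an orchestrator that drives
# edge-node tasks through the four actions named in the abstract.
from enum import Enum

class TaskState(Enum):
    CREATED = "creation"
    MODIFIED = "modification"
    RUNNING = "operation"
    DONE = "completion"

class Orchestrator:
    def __init__(self, sdn_controller):
        self.sdn = sdn_controller          # hypothetical controller interface
        self.tasks = {}

    def create(self, task_id, vnf_chain):
        self.tasks[task_id] = {"vnfs": vnf_chain, "state": TaskState.CREATED}
        self.sdn.install_path(task_id, vnf_chain)   # steer flows through the VNF chain

    def modify(self, task_id, vnf_chain):
        self.tasks[task_id].update(vnfs=vnf_chain, state=TaskState.MODIFIED)
        self.sdn.update_path(task_id, vnf_chain)

    def operate(self, task_id):
        self.tasks[task_id]["state"] = TaskState.RUNNING

    def complete(self, task_id):
        self.tasks[task_id]["state"] = TaskState.DONE
        self.sdn.remove_path(task_id)

class _StubController:
    """Stand-in for the SDN controller interface (hypothetical methods)."""
    def install_path(self, task_id, vnfs): print("install", task_id, vnfs)
    def update_path(self, task_id, vnfs): print("update", task_id, vnfs)
    def remove_path(self, task_id): print("remove", task_id)

if __name__ == "__main__":
    orch = Orchestrator(_StubController())
    orch.create("t1", ["firewall", "nat"])
    orch.operate("t1")
    orch.complete("t1")
```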

3.
Cloud computing aims to provide dynamic leasing of server capabilities as scalable virtualized services to end users. However, data centers hosting cloud applications consume vast amounts of electrical energy, thereby contributing to high operational costs and carbon footprints. Green cloud computing solutions that can not only minimize operational costs but also reduce the environmental impact are necessary. This study focuses on the Infrastructure as a Service model, where custom virtual machines (VMs) are launched on appropriate servers available in a data center. A complete data center resource management scheme is presented in this paper. The scheme not only ensures user quality of service (through service level agreements) but also achieves maximum energy saving and green computing goals. Considering that a data center usually hosts tens of thousands of servers, making it difficult to solve the resource allocation problem with an exact algorithm, a modified shuffled frog leaping algorithm and improved extremal optimization are employed in this study to solve the dynamic allocation problem of VMs. Experimental results demonstrate that the proposed resource management scheme exhibits excellent performance in green cloud computing.
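For context, metaheuristics of this kind search over VM-to-host placements and score each candidate with a fitness function. The sketch below shows one plausible fitness (a linear utilization-to-power model plus an overload penalty); the constants and the penalty form are assumptions for illustration, not the paper's actual objective.

```python
# Illustrative sketch (assumptions throughout, not the paper's algorithm): the kind
# of fitness function that metaheuristics such as shuffled frog leaping or extremal
# optimization evaluate when searching for a VM-to-host placement.

HOST_CAPACITY = 1.0        # normalized CPU capacity per host (assumed)
P_IDLE, P_MAX = 0.7, 1.0   # normalized power at 0% / 100% load (assumed)

def placement_fitness(placement: dict[int, int], vm_load: dict[int, float],
                      sla_penalty: float = 10.0) -> float:
    """Lower is better: total power of powered-on hosts plus overload penalties."""
    host_load: dict[int, float] = {}
    for vm, host in placement.items():
        host_load[host] = host_load.get(host, 0.0) + vm_load[vm]

    cost = 0.0
    for load in host_load.values():
        utilization = min(load, HOST_CAPACITY)
        cost += P_IDLE + (P_MAX - P_IDLE) * utilization          # host power
        cost += sla_penalty * max(0.0, load - HOST_CAPACITY)     # overload penalty
    return cost

if __name__ == "__main__":
    vm_load = {0: 0.5, 1: 0.4, 2: 0.3}
    print(placement_fitness({0: 0, 1: 0, 2: 1}, vm_load))   # two hosts, no overload
    print(placement_fitness({0: 0, 1: 0, 2: 0}, vm_load))   # one host, overloaded
```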

4.
Existing network devices support a huge body of protocols and are therefore highly complex, which not only limits the technical evolution of IP networks but also fails to meet current application trends such as cloud computing, big data, and server virtualization. Software-defined networking (SDN), as a new network architecture, redefines and abstracts the control-plane, forwarding-plane, and application-layer functions of network devices, making network device software programmable and promising to change this situation. This paper introduces the background, characteristics, and current development of OpenFlow, analyzes the OpenFlow-based SDN architecture and the key techniques of platform design, and explores applications of SDN in network management automation, unified control of optical transport and IP networks, smooth handover in wireless networks, network virtualization, and QoS guarantees.

5.
To meet the demands of cloud computing, big data, and other applications for managing and aggregating heterogeneous resources, a converged-architecture cloud server and its key supporting technologies are proposed. Using hardware resource pooling, the converged-architecture cloud server decouples and re-converges the computing, storage, networking, power, cooling, and management modules. It features high density, low power consumption, and ease of scaling, management, and maintenance, and it combines the advantages of scale-out and scale-up, optimizing deployment, operation, and energy costs and significantly reducing total cost of ownership (TCO). Practical deployments in the finance, telecom, and Internet industries show that the converged-architecture cloud server reduces power consumption by more than 15% and TCO by nearly 15%, providing an IT infrastructure design with better performance per watt for cloud computing, big data, and similar applications.

6.
Energy Management of Virtualized Cloud Computing Platforms (cited: 15; self-citations: 0)
The high energy consumption of data centers is an urgent problem. In recent years, virtualization technology and the cloud computing model have developed rapidly; because of their high resource utilization, flexible management, and good scalability, future data centers will widely adopt them. Combining traditional power management techniques with virtualization offers a new way to approach the energy management problem of cloud data centers and is an important research direction. This paper surveys the latest research on energy management for virtualized cloud computing platforms from four aspects: power measurement, power modeling, energy management mechanisms, and energy management optimization algorithms. It analyzes the operational management and energy management problems faced by virtualized cloud platforms and points out the difficulties of power monitoring and measurement; it introduces the steps of power monitoring and methods of power profiling; it presents an overall power model for virtual machine systems as well as power models for two key techniques, server consolidation and live migration; it summarizes progress on energy management mechanisms at the virtualization layer and the cloud platform layer; and it classifies and compares energy management algorithms. Finally, the paper concludes and identifies ten directions worth further research.
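To make the modeling discussion concrete, the sketch below shows a commonly used linear utilization-based server power model together with a naive estimate of the extra energy a live migration adds. The constants and the migration formula are illustrative assumptions, not the specific models surveyed in the paper.

```python
# Illustrative sketch only: a widely used linear utilization-to-power model for a
# virtualized server, plus a rough estimate of the energy added by a live migration.

P_IDLE = 170.0   # watts at 0% CPU utilization (assumed)
P_MAX  = 250.0   # watts at 100% CPU utilization (assumed)

def server_power(cpu_utilization: float) -> float:
    """Linear power model: P(u) = P_idle + (P_max - P_idle) * u, with u in [0, 1]."""
    u = min(max(cpu_utilization, 0.0), 1.0)
    return P_IDLE + (P_MAX - P_IDLE) * u

def migration_energy(vm_memory_gb: float, bandwidth_gbps: float, overhead_w: float = 30.0) -> float:
    """Rough energy cost (joules) of a live migration: transfer time * extra power draw."""
    transfer_seconds = vm_memory_gb * 8.0 / bandwidth_gbps
    return transfer_seconds * overhead_w

if __name__ == "__main__":
    print(server_power(0.6))               # -> 218.0 W
    print(migration_energy(4.0, 10.0))     # -> 96.0 J
```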

7.
Cloud computing data centers are becoming ever larger, with growing hardware scale, and large amounts of computing and storage resources are moving into the cloud, giving rise to data centers with hundreds of thousands, millions, or even tens of millions of servers that can be incrementally expanded and deployed. The problem of high energy consumption has become increasingly prominent and severely constrains the sustainable development of cloud data centers. This paper proposes a new scalable server energy-saving optimization strategy for cloud data centers, an efficiency-optimization strategy, which reduces energy consumption from a global perspective, optimizes the server selection process, and achieves load balancing across servers. Simulation results show that, in terms of energy consumption, the proposed efficiency-optimization strategy saves 15.23% and 24.33% compared with the DVFS strategy and the no-migration strategy, respectively; in terms of migrations, it performs 2425 fewer migrations than the DVFS strategy. Overall, the proposed efficiency-optimization strategy clearly outperforms both the DVFS and no-migration strategies.

8.
As cloud computing has become a popular computing paradigm, many companies have begun to build increasing numbers of energy-hungry data centers for hosting cloud computing applications. Thus, energy consumption is increasingly becoming a critical issue in cloud data centers. In this paper, we propose a dynamic resource management scheme which takes advantage of both dynamic voltage/frequency scaling and server consolidation to achieve energy efficiency and desired service level agreements in cloud data centers. The novelty of the proposed scheme is to integrate timing analysis, queueing theory, integer programming, and control theory techniques. Our experimental results indicate that, compared to a statically provisioned data center that runs at the maximum processor speed without utilizing the sleep state, the proposed resource management scheme can achieve up to 50.3% energy savings while satisfying response-time-based service level agreements with rapidly changing dynamic workloads.
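One way to picture the DVFS side of such a scheme is to pick the lowest processor frequency that still meets a response-time SLA, modeling each server as an M/M/1 queue whose service rate scales with frequency. The sketch below is a hedged simplification with assumed numbers, not the paper's combined timing/queueing/integer-programming/control formulation.

```python
# Minimal sketch (assumption, not the paper's formulation): pick the lowest DVFS
# frequency that still meets a response-time SLA under an M/M/1 approximation.

def mm1_response_time(arrival_rate: float, service_rate: float) -> float:
    """Mean response time of an M/M/1 queue: T = 1 / (mu - lambda)."""
    if service_rate <= arrival_rate:
        return float("inf")        # unstable: queue grows without bound
    return 1.0 / (service_rate - arrival_rate)

def pick_frequency(arrival_rate, base_service_rate, frequencies, sla_seconds):
    """Return the lowest frequency (GHz) meeting the SLA, or None if none does."""
    for f in sorted(frequencies):
        mu = base_service_rate * f          # service rate assumed linear in frequency
        if mm1_response_time(arrival_rate, mu) <= sla_seconds:
            return f
    return None

if __name__ == "__main__":
    # 80 req/s arriving, 100 req/s of service capacity per GHz, SLA of 0.03 s (assumed)
    print(pick_frequency(80.0, 100.0, [1.0, 1.4, 1.8, 2.2], 0.03))   # -> 1.4
```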

9.
蔡昕 《电脑开发与应用》2014,(3):233-234,78
IaaS (Infrastructure as a Service) is the foundational part of cloud services. Through this kind of infrastructure service, many data resources can be pooled and optimized, achieving minimum cost while still satisfying a variety of external resource services. This paper first explains the concepts related to IaaS cloud computing and compares it with traditional VPS offerings, and on that basis discusses issues in efficient server management based on IaaS cloud computing.

10.
Balance of power: dynamic thermal management for Internet data centers (cited: 1; self-citations: 0)
Internet-based applications and their resulting multitier distributed architectures have changed the focus of design for large-scale Internet computing. Internet server applications execute in a horizontally scalable topology across hundreds or thousands of commodity servers in Internet data centers. Increasing scale and power density significantly impact the data center's thermal properties. Effective thermal management is essential to the robustness of mission-critical applications. Internet service architectures can address multisystem resource management as well as thermal management within data centers.

11.
Design of a Peer-to-Peer Network Game Framework Built on the JXTA Platform (cited: 2; self-citations: 0)
叶绿 《计算机工程与应用》2004,40(17):144-147,151
To address the complexity of managing the many devices and services in a network, this paper proposes a feasible approach based on peer-to-peer computing, so that a game system needs no central server, or needs a central server with only a small amount of computation. A game framework built on the JXTA peer-to-peer computing platform is designed that enables devices to communicate with one another, provides the game platform with the JXTA service-layer services that games require, and offers a unified runtime environment for different types of games.

12.
The growth in computer and networking technologies over the past decades established cloud computing as a new paradigm in information technology. Cloud computing promises to deliver cost-effective services by running workloads in a large-scale data center consisting of thousands of virtualized servers. The main challenge with a cloud platform is its unpredictable performance. A possible solution to this challenge is a load balancing mechanism that aims to distribute the workload across the servers of the data center effectively. In this paper, we present a distributed and scalable load balancing mechanism for cloud computing using game theory. The mechanism is self-organized and depends only on local information for load balancing. We prove that our mechanism converges and that its inefficiency is bounded. Simulation results show that the generated placement of workload on servers provides an efficient, scalable, and reliable load balancing scheme for the cloud data center. Copyright © 2016 John Wiley & Sons, Ltd.
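A common way to realize such a self-organized, local-information mechanism is best-response dynamics: each job repeatedly migrates to the server that minimizes its own latency, and for this class of congestion games the process converges to a pure Nash equilibrium. The sketch below is an illustrative assumption, not the paper's exact mechanism or cost model.

```python
# Illustrative sketch (an assumption, not the paper's mechanism): best-response
# dynamics for a simple load-balancing game.
import random

def best_response_balance(num_servers: int, jobs: list[float], rounds: int = 100):
    placement = [random.randrange(num_servers) for _ in jobs]
    load = [0.0] * num_servers
    for j, s in enumerate(placement):
        load[s] += jobs[j]

    for _ in range(rounds):
        moved = False
        for j, s in enumerate(placement):
            # Cost seen by job j on each server if it moved there (its own weight included).
            best = min(range(num_servers), key=lambda t: load[t] + (0 if t == s else jobs[j]))
            if best != s:
                load[s] -= jobs[j]
                load[best] += jobs[j]
                placement[j] = best
                moved = True
        if not moved:            # no job can improve: equilibrium reached
            break
    return placement, load

if __name__ == "__main__":
    random.seed(1)
    print(best_response_balance(3, [5, 3, 2, 2, 1, 1]))
```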

13.
The vast majority of Web services and sites are hosted in various kinds of cloud services, and providing some level of quality of service (QoS) in such systems requires effective load-balancing policies that choose among multiple clouds. Recently, software-defined networking (SDN) has emerged as one of the most promising solutions for load balancing in cloud data centers. SDN is characterized by two distinguishing features: decoupling the control plane from the data plane and providing programmability for network application development. By using these technologies, SDN and cloud computing can improve cloud reliability, manageability, scalability, and controllability. An SDN-based cloud is a new type of cloud in which SDN technology is used to acquire control over the network infrastructure and to provide networking-as-a-service (NaaS) in cloud computing environments. In this paper, we introduce an SDN-enhanced Inter cloud Manager (S-ICM) that allocates network flows in the cloud environment. S-ICM consists of two main parts, monitoring and decision making. For monitoring, S-ICM uses SDN control messages to observe and collect data, and decision making is based on the measured network delay of packets. Measurements are used to compare S-ICM with a round robin (RR) allocation of jobs between clouds, which spreads the workload equitably, and with a honeybee foraging algorithm (HFA). We see that S-ICM is better at avoiding system saturation than HFA and RR under heavy load. Measurements are also used to evaluate whether a simple queueing formula can be used to predict system performance for several clouds operated under an RR scheduling policy, and they show the validity of the theoretical approximation.
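The delay-driven decision making can be sketched as follows: probe delays per cloud are smoothed and the lowest-delay cloud is chosen, with plain round robin shown for comparison. Class names, the EWMA smoothing, and the probe interface are assumptions for illustration, not the S-ICM implementation.

```python
# Minimal sketch (assumption, not the S-ICM implementation): choose the target cloud
# by measured network delay, with plain round robin shown for comparison.
import itertools

class DelayAwareScheduler:
    """Send each flow to the cloud with the lowest recently measured delay."""
    def __init__(self, clouds):
        self.delay_ms = {c: 0.0 for c in clouds}

    def update_delay(self, cloud, measured_ms, alpha=0.3):
        # Exponentially weighted moving average smooths noisy probe measurements.
        self.delay_ms[cloud] = (1 - alpha) * self.delay_ms[cloud] + alpha * measured_ms

    def pick(self):
        return min(self.delay_ms, key=self.delay_ms.get)

class RoundRobinScheduler:
    def __init__(self, clouds):
        self._cycle = itertools.cycle(clouds)

    def pick(self):
        return next(self._cycle)

if __name__ == "__main__":
    sched = DelayAwareScheduler(["cloud-a", "cloud-b"])
    sched.update_delay("cloud-a", 40.0)
    sched.update_delay("cloud-b", 15.0)
    print(sched.pick())        # -> cloud-b
```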

14.
We consider the problem of power and performance management for a multicore server processor in a cloud computing environment by optimal server configuration for a specific application environment. The motivation of the study is that such optimal virtual server configuration is important for dynamic resource provision in a cloud computing environment to optimize the power and performance tradeoff for certain specific types of applications. Our strategy is to treat a multicore server processor as an M/M/m queueing system with multiple servers. The system performance measures are the average task response time and the average power consumption. Two core speed and power consumption models are considered, namely, the idle-speed model and the constant-speed model. Our investigation includes justification of centralized management of computing resources, server speed constrained optimization, power constrained performance optimization, and performance constrained power optimization. Our main results are (1) cores should be managed in a centralized way to provide the highest performance without consumption of more energy in cloud computing; (2) for a given server speed constraint, fewer high-speed cores perform better than more low-speed cores; furthermore, there is an optimal selection of server size and core speed which can be obtained analytically, such that a multicore server processor consumes the minimum power; (3) for a given power consumption constraint, there is an optimal selection of server size and core speed which can be obtained numerically, such that the best performance can be achieved, i.e., the average task response time is minimized; (4) for a given task response time constraint, there is an optimal selection of server size and core speed which can be obtained numerically, such that minimum power consumption can be achieved while the given performance guarantee is maintained.
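The abstract's queueing view can be made concrete with the standard M/M/m mean response time, T = 1/mu + C(m, lambda/mu) / (m*mu - lambda), where C is the Erlang C waiting probability, combined with a speed-to-power relation. The sketch below uses assumed parameter values and an assumed power exponent; it illustrates the kind of trade-off the paper optimizes rather than reproducing its exact models.

```python
# Illustrative sketch (parameter values and the power exponent are assumptions):
# treat an m-core server as an M/M/m queue and evaluate response time and power
# under the two power models mentioned in the abstract.
from math import factorial

def erlang_c(m: int, rho_total: float) -> float:
    """Probability an arriving task must wait (Erlang C), with rho_total = lambda/mu."""
    util = rho_total / m
    if util >= 1.0:
        return 1.0
    summation = sum(rho_total ** k / factorial(k) for k in range(m))
    top = rho_total ** m / (factorial(m) * (1 - util))
    return top / (summation + top)

def mmm_response_time(lam: float, mu: float, m: int) -> float:
    """Mean response time T = 1/mu + C(m, lam/mu) / (m*mu - lam)."""
    return 1.0 / mu + erlang_c(m, lam / mu) / (m * mu - lam)

def power(m: int, speed: float, utilization: float, alpha: float = 2.0,
          idle_speed_model: bool = True) -> float:
    """Idle-speed model: only busy cores draw dynamic power; constant-speed: all cores do."""
    busy_cores = m * utilization if idle_speed_model else m
    return busy_cores * speed ** alpha

if __name__ == "__main__":
    lam, m, speed = 6.0, 4, 2.0        # tasks/s, cores, GHz (assumed)
    mu = 1.0 * speed                    # service rate assumed proportional to core speed
    print(mmm_response_time(lam, mu, m), power(m, speed, lam / (m * mu)))
```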

15.
A cloud data center contains a large number of computers and is costly to operate. Effectively consolidating resources, improving resource utilization, saving energy, and reducing operating costs are therefore major concerns for cloud data centers. Cloud data centers use virtualization technology to build computing, storage, and network resources into dynamic virtual resource pools; virtual resource management technology enables automatic deployment, dynamic scaling, and on-demand allocation of cloud computing resources; and users obtain resources in an on-demand, pay-as-you-go manner. The pressing need to improve resource utilization is therefore driving the search for new ways to build the next generation of data centers.

16.
阳鑫磊  何倩  曹礼  王士成 《计算机科学》2017,44(11):268-272, 283
Remote sensing data are growing rapidly, and large-scale distribution of such data puts enormous pressure on centralized distribution servers. Making full use of the network resources of the downloading nodes, this paper proposes and implements a P2P large-scale remote sensing data distribution system with access control. The system consists of two parts, a remote sensing data management platform and a remote sensing data client. The management platform comprises four components: a sharing and distribution web portal, cloud storage, a seed resource server, and a tracker server; the clients and the seed resource server form the P2P network. A P2P protocol covering piece sharing, piece selection, and tracker communication is designed. The implemented system can upload remote sensing data and automatically seed it, supports user access control, allows downloads according to user permissions, shares pieces among downloading nodes, and accelerates distribution of remote sensing data with a BitTorrent-like protocol. Experimental results show that the system is functionally complete, exhibits good concurrency when multiple nodes download simultaneously, and meets the needs of large-scale remote sensing data distribution.
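Piece selection in BitTorrent-like distribution is typically rarest-first: a downloader requests the missing piece held by the fewest peers so that scarce pieces spread quickly. The sketch below illustrates that policy under simple assumptions; it is not the system's actual protocol code.

```python
# Minimal sketch (assumption, not the system's protocol): rarest-first piece
# selection given each peer's piece bitmap.
from collections import Counter

def rarest_first(my_pieces, peer_bitmaps):
    """Pick the piece I am missing that the fewest peers currently hold."""
    availability = Counter()
    for bitmap in peer_bitmaps:
        availability.update(bitmap)
    candidates = [(availability[p], p) for bitmap in peer_bitmaps
                  for p in bitmap if p not in my_pieces]
    if not candidates:
        return None
    return min(candidates)[1]      # fewest copies first, lowest index breaks ties

if __name__ == "__main__":
    # Piece 3 is held by two peers, piece 2 by three, so piece 3 is requested first.
    print(rarest_first({0, 1}, [{0, 1, 2}, {1, 2, 3}, {2, 3}]))   # -> 3
```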

17.
To address problems in the water-resources informatization of Anhui Province, where existing data resources are scattered, standards are inconsistent, sharing is difficult, and development and utilization are inefficient, and building on the results of earlier construction, a water-resources big data center is being built through a series of efforts: data resource integration, development of a data resource management system, application service support, pilot big data analytics applications, construction of a cloud computing platform, and drafting of technical specifications and management rules. The work clarifies the relationships among water-resources objects, enables data aggregation, exchange, and orderly sharing, and provides application systems with ...

18.
Research into ambient assisted living (AAL) strives to ease the daily lives of people with disabilities or chronic medical conditions. AAL systems typically consist of multitudes of sensors and embedded devices, generating large amounts of medical and ambient data. However, these biomedical sensors lack the processing power to perform key monitoring and data-aggregation tasks, necessitating data transmission and computation at central locations. The focus here is on the development of a scalable and context-aware framework and easing the flow between data collection and data processing. The resource-constrained nature of typical wearable body sensors is factored into our proposed model, with cloud computing features utilized to provide a real-time assisted-living service. With the myriad of distributed AAL systems at play, each with unique requirements and eccentricities, the challenge lies in the need to service these disparate systems with a middleware layer that is both coherent and flexible. There is significant complexity in the management of sensor data and the derivation of contextual information, as well as in the monitoring of user activities and in locating appropriate situational services. The proposed CoCaMAAL model seeks to address such issues and implement a service-oriented architecture (SOA) for unified context generation. This is done by efficiently aggregating raw sensor data and the timely selection of appropriate services using a context management system (CMS). With a unified model that includes patients, devices, and computational servers in a single virtual community, AAL services are enhanced. We have prototyped the proposed model and implemented some case studies to demonstrate its effectiveness.

19.
Next generation cloud systems will require a paradigm shift in how they are constructed and managed. Conventional control and management platforms face considerable challenges regarding the flexibility, dependability, and security that next generation systems must handle. Cloud computing technology has already helped alleviate a number of the problems associated with resource allocation, utilization, and management. However, many of the elements of a well-designed cloud environment remain “stiff” and hard to modify and adapt in an integrated fashion. This includes the underlying networking topologies, many aspects of user control over the IaaS, PaaS, or SaaS layers, construction of XaaS services, and provenance and metadata collection, to mention but a few. In many situations the problem may be due to inadequate service abstraction. Software Defined Systems (SDSys) is a concept that abstracts the actual hardware at different layers with software components; one classical example of such abstraction is the hypervisor. Such abstraction gives system administrators an opportunity to construct and manage their systems more easily through flexible software layers. SDSys is an umbrella for different software defined subsystems, including Software Defined Networking (SDN), Software Defined Storage (SDStorage), Software Defined Servers (virtualization), Software Defined Data Centers (SDDC), Software Defined Security (SDSec), and ultimately Software Defined Clouds (SDCloud). Individual solutions, and the seamless integration of these different abstractions, remain in many respects a challenge. In this paper, the authors introduce Software Defined Cloud (SDCloud), a novel software defined cloud management framework that integrates different software defined cloud components to handle the complexities associated with cloud computing systems. The first part of the paper presents, for the first time, an extensive state-of-the-art critical review of the different components of software defined systems that constitute the proposed SDCloud. The second part of the paper proposes the novel concept of SDCloud, which is implemented and evaluated for its feasibility, flexibility, and potential superiority.

20.
A Survey of Resource Scheduling Research in Cloud Computing (cited: 27; self-citations: 5)
Resource scheduling is a major research direction in cloud computing. This paper first surveys and analyzes the state of research on cloud resource scheduling. It then focuses on scheduling methods aimed at reducing the energy consumption of cloud data centers, resource management methods aimed at improving system resource utilization, and economics-based cloud resource management models; presents a cloud resource scheduling model for minimum energy consumption and a model for the minimum number of servers; and analyzes and compares existing cloud resource scheduling methods in depth. Finally, it points out important future research directions for cloud resource management: prediction-based resource scheduling, scheduling that trades off energy and performance, resource management strategies and mechanisms for different application workloads, integrated allocation of computing capacity (CPU, memory) and network bandwidth, and multi-objective resource scheduling optimization, providing a useful reference for cloud computing research.
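The "minimum number of servers" view of scheduling mentioned above reduces to a bin-packing problem, for which first-fit decreasing is a simple heuristic. The sketch below is illustrative only, with normalized CPU shares as an assumption; it does not reproduce any specific model from the survey.

```python
# Illustrative sketch (an assumption, not the survey's models): first-fit decreasing
# bin packing as a heuristic for consolidating VMs onto the fewest servers.

def first_fit_decreasing(vm_demands: list[float], server_capacity: float = 1.0) -> list[list[float]]:
    """Pack VMs onto as few servers as the heuristic can manage."""
    servers: list[list[float]] = []
    free: list[float] = []
    for demand in sorted(vm_demands, reverse=True):
        for i, slack in enumerate(free):
            if demand <= slack:
                servers[i].append(demand)
                free[i] -= demand
                break
        else:                          # no existing server fits: power on a new one
            servers.append([demand])
            free.append(server_capacity - demand)
    return servers

if __name__ == "__main__":
    print(first_fit_decreasing([0.6, 0.5, 0.4, 0.3, 0.2]))
    # -> [[0.6, 0.4], [0.5, 0.3, 0.2]] : two servers instead of five
```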
