Similar Literature
19 similar documents found (search time: 343 ms)
1.
RepliStor is a mass data migration solution from the LEGATO software division of EMC that uses US-patented real-time, asynchronous, multi-point data replication technology across wide-area and local-area networks. LEGATO RepliStor is extremely easy to use. It employs asynchronous, real-time data replication technology protected by a US patent to migrate customers' mission-critical business data, such as database files, document systems, and billing system data, over a LAN or WAN to one or more other data centers, thereby achieving data-level high availability. This patented technology is not subject to any…

2.
The rise of virtualization has broken the traditional data center architecture. Numerous interconnection scenarios now exist between data centers, such as geo-redundant active-active deployment, data center disaster recovery, synchronous data backup, data migration, centralized storage, shared virtualized resource pools, and virtual machine drift, which generate large volumes of east-west traffic. A reliable, flexible, secure, and easily deployable Layer 2/Layer 3 network is therefore needed to meet data center interconnection (DCI) requirements. This paper studies the EVPN VxLAN networking architecture and, in the absence of any successful large-scale commercial deployment of heterogeneous multi-vendor networking in the industry, proposes and implements a DCI network suitable for the live network, based on the current state of the LSN data center of China Telecom Shanghai branch.

3.
The paper argues that the widespread use of virtualization complicates energy management in data centers: during VM migration and server consolidation, one must decide which VM to migrate, when to migrate it, how to place VMs, and which issues to consider when consolidating workloads. Organizing and classifying the various sources of energy consumption in current virtualized data center environments is therefore of great significance for the global management of resources and VMs, sensible VM placement, dynamic VM migration, and server consolidation. The paper studies energy-saving mechanisms for data centers, including global scheduling management in virtualized data centers, initial VM placement and consolidation, online VM migration strategies, and considerations during migration and consolidation such as network traffic, migration overhead, and VM workload correlation.

4.
As cloud computing has become mainstream, the number of virtual machines in cloud resource pools keeps growing, so migrating VMs between hosts quickly and at low cost, to meet load-balancing and energy-saving requirements, has become a hot research topic. This paper designs a distributed image file storage system, IFSpool. Built on Lustre, the system targets three goals: large-capacity, highly scalable support for tens of thousands of VM images; high-throughput I/O data access; and boot optimization against VM boot storms. IFSpool tightly combines queueing and caching mechanisms and coordinates the order of data accesses to raise I/O throughput and mitigate boot storms. Its I/O bandwidth and storage capacity can easily be expanded by dynamically adding storage nodes and servers. Experimental results show that IFSpool is very effective for VM migration.

5.
林秀 《电信技术》2016,(3):60-63
Based on an analysis of why enterprise IT systems need active-active data centers, this paper analyzes the requirements that cloud VM-level cross-data-center migration places on a large Layer 2 network and how they can be met. It then discusses two technical approaches to implementing active-active data centers at the business level, the network's requirements for global service monitoring and availability awareness, the service coordination and scheduling mechanism, and the key design points of the front-end application network for enterprise active-active data centers.

6.
In addition to providing external storage for workstations and servers, a WSS 2008 storage server can also provide shared external storage for hosts running VMware ESX Server or Hyper-V virtualization software, enabling advanced features such as "hot" (live) migration of virtual machines. This article describes how to use iSCSI storage provided by WSS 2008 in VMware ESX Server 4.0.

7.
In intelligent connected transportation systems, the high-speed movement of vehicular users inevitably causes frequent data migration between edge servers, introducing extra backhaul communication delay and posing a major challenge to the real-time computing services of edge servers. To address this, the paper proposes a fast deep Q-learning network edge-cloud migration strategy based on vehicle trajectories (DQN-TP), enabling offline evaluation and online decision making for data migration. An on-vehicle decision neural network obtains the network state and backhaul delay of the connected edge servers in real time and decides whether to migrate the virtual machine or the task according to the vehicle's trajectory, while sending the real-time decisions and the collected edge server state information to an experience replay pool in the cloud. An evaluation neural network in the cloud reads the replay pool to train and optimize the network parameters and periodically updates the weights of the on-vehicle decision network, thereby optimizing online decisions. Simulations show that, compared with pure VM migration and pure task migration algorithms, the proposed algorithm effectively reduces delay.
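The abstract above describes a split between an on-vehicle decision network and a cloud-side trainer fed by an experience replay pool. The sketch below illustrates only that structure; the class names, the tiny linear Q-function standing in for the deep network, the state features, and the reward are illustrative assumptions, not the authors' DQN-TP implementation.

```python
import random
from collections import deque

import numpy as np

STATE_DIM = 4   # e.g. edge load, bandwidth, backhaul delay, position along trajectory
N_ACTIONS = 2   # 0 = migrate the virtual machine, 1 = migrate only the task

class LinearQNet:
    """Tiny linear Q-function standing in for the deep network."""
    def __init__(self):
        self.w = np.zeros((STATE_DIM, N_ACTIONS))

    def q_values(self, state):
        return state @ self.w

class OnVehicleDecider:
    """Acts online with an epsilon-greedy policy over the current weights."""
    def __init__(self, net, epsilon=0.1):
        self.net, self.epsilon = net, epsilon

    def decide(self, state):
        if random.random() < self.epsilon:
            return random.randrange(N_ACTIONS)
        return int(np.argmax(self.net.q_values(state)))

class CloudTrainer:
    """Learns offline from the replay pool and pushes weights back periodically."""
    def __init__(self, lr=0.01, gamma=0.9):
        self.net, self.replay = LinearQNet(), deque(maxlen=10_000)
        self.lr, self.gamma = lr, gamma

    def add_experience(self, s, a, r, s_next):
        self.replay.append((s, a, r, s_next))

    def train_step(self, batch_size=32):
        batch = random.sample(list(self.replay), min(batch_size, len(self.replay)))
        for s, a, r, s_next in batch:
            target = r + self.gamma * np.max(self.net.q_values(s_next))
            td_error = target - self.net.q_values(s)[a]
            self.net.w[:, a] += self.lr * td_error * s   # gradient step for this action

    def publish_weights(self, decider):
        decider.net.w = self.net.w.copy()                # periodic weight sync to the vehicle

# usage sketch: the vehicle decides online, the cloud trains offline
trainer = CloudTrainer()
decider = OnVehicleDecider(LinearQNet())
state = np.random.rand(STATE_DIM)
action = decider.decide(state)                           # migrate VM or migrate task
reward = -np.random.rand()                               # e.g. negative observed delay (assumed)
trainer.add_experience(state, action, reward, np.random.rand(STATE_DIM))
trainer.train_step()
trainer.publish_weights(decider)
```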

8.
《现代电子技术》2019,(20):128-132
VM allocation strategies are a key technology for improving physical host utilization and reducing energy consumption in cloud data centers. This paper proposes LEC-VM, a VM allocation strategy for cloud data centers oriented toward low energy consumption. LEC-VM has two components: a VM placement strategy and a VM migration optimization strategy. The placement strategy assigns VMs to the most suitable physical nodes while keeping the system's overall CPU utilization below a given threshold; the migration optimization strategy dynamically migrates VMs according to the current system state to optimize physical host resources. CloudSim is used as the cloud-side test environment for the data center. Test results show that LEC-VM reduces SLA violations in the cloud data center, guarantees cloud service quality, and lowers energy consumption compared with other VM allocation strategies.
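As a rough illustration of the two parts described above, the sketch below places a VM on the fullest host that stays under a CPU threshold and picks VMs to evict from an overloaded host. The 0.8 threshold, data structures, and selection rules are assumptions for illustration, not the LEC-VM algorithms.

```python
CPU_THRESHOLD = 0.8   # assumed utilization cap per host

def place_vm(vm_cpu, hosts):
    """Pick the host that stays below the threshold with the least slack left (tightest fit)."""
    candidates = [h for h in hosts if h["used"] + vm_cpu <= CPU_THRESHOLD * h["capacity"]]
    if not candidates:
        return None                                   # no feasible host: a new host is needed
    best = max(candidates, key=lambda h: (h["used"] + vm_cpu) / h["capacity"])
    best["used"] += vm_cpu
    return best

def vms_to_migrate(host, vms_on_host):
    """When a host exceeds the threshold, free capacity starting with the largest VMs."""
    excess = host["used"] - CPU_THRESHOLD * host["capacity"]
    chosen = []
    for vm in sorted(vms_on_host, key=lambda v: v["cpu"], reverse=True):
        if excess <= 0:
            break
        chosen.append(vm)
        excess -= vm["cpu"]
    return chosen

hosts = [{"id": 0, "capacity": 1.0, "used": 0.5}, {"id": 1, "capacity": 1.0, "used": 0.2}]
print(place_vm(0.25, hosts))                          # lands on host 0, the tighter fit under 0.8
```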

9.
《信息通信技术》2015,(1):29-33
As cloud computing data centers grow in scale, mature virtualization technology ensures that compute and storage capacity scale automatically; the network, however, still relies on traditional technologies, its degree of virtualization is seriously insufficient, and network capability has gradually become the bottleneck constraining the flexible provisioning of cloud resources. SDN (Software Defined Network) separates the control plane from the forwarding plane and effectively solves the problem of flexibly configuring network resources. This article studies existing SDN technology, focusing on SDN-based data center networking: it describes the SDN architecture in detail and analyzes controllers and forwarding devices, providing a useful reference for understanding SDN-based data center networks and a technical basis for deploying SDN in data centers. Introducing SDN effectively addresses networking problems such as multipath forwarding of massive data and intelligent VM migration in data centers, enabling intelligent planning of network traffic.

10.
《现代电子技术》2017,(22):33-35
Traditional hypervisor-model-based cloud resource scheduling methods suffer from long waits before scheduling and low scheduling performance. To address this, a container-technology-based method for reasonable cloud resource scheduling is designed, including the scheduling system architecture and the scheduling workflow. The procedures for deciding when to migrate a VM and for selecting the VM to be migrated are described in detail. VM migration is performed with a Migrate method, the resource statistics process communicates through the Libvirt interface, and VM CPU utilization is computed approximately, reducing data center energy consumption during cloud resource scheduling. Tests show that the proposed method is highly stable, performs well overall, and meets the expected goals.
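The approximate CPU-utilization calculation mentioned above can be illustrated with the libvirt Python bindings: sample the domain's cumulative CPU time twice and divide the delta by the elapsed wall-clock time and the number of vCPUs. This is a generic sketch, not the paper's code; the hypervisor URI and the domain name "vm01" are placeholders.

```python
import time

import libvirt  # pip install libvirt-python

def approx_cpu_utilization(domain, interval=1.0):
    """Utilization ~= delta(cpuTime) / (elapsed_time * vCPUs); cpuTime is in nanoseconds."""
    info1 = domain.info()            # [state, maxMem, memory, nrVirtCpu, cpuTime]
    t1 = time.time()
    time.sleep(interval)
    info2 = domain.info()
    elapsed_ns = (time.time() - t1) * 1e9
    cpu_delta_ns = info2[4] - info1[4]
    vcpus = info2[3]
    return cpu_delta_ns / (elapsed_ns * vcpus)

if __name__ == "__main__":
    conn = libvirt.openReadOnly("qemu:///system")    # placeholder hypervisor URI
    dom = conn.lookupByName("vm01")                  # placeholder domain name
    print(f"approx CPU utilization: {approx_cpu_utilization(dom):.2%}")
    conn.close()
```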

11.
Virtual machine (VM) migration enables flexible and efficient resource management in modern data centers. Although various VM migration algorithms have been proposed to improve the utilization of physical resources in data centers, they generally select the VMs to be migrated only according to their resource requirements, ignoring both the relationship between VMs and servers with respect to their varying resource usage and the time at which the VMs should be migrated. This can dramatically degrade algorithm performance and increase operating and capital costs when the resource requirements of the VMs change dynamically over time. In this paper, we propose an integrated VM migration strategy that jointly considers and addresses these issues. First, we establish a service-level-agreement-based soft migration mechanism to significantly reduce the number of VM migrations. Then, we develop two algorithms to solve the VM and server selection problems, in which the correlation between VMs and servers is used to identify the appropriate VMs to migrate and their destination servers. Experimental results from extensive simulations show the effectiveness of the proposed algorithms compared with traditional schemes in terms of resource usage rate, operating cost, and capital cost.
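One plausible way to use VM-server correlation for destination selection, in the spirit of the abstract, is sketched below: the VM is sent to the server whose recent utilization history is least correlated with its own, so their peaks are less likely to coincide. This illustrates the general idea only; the function names and data are assumptions, not the paper's algorithms.

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation between two utilization histories (0 if either is constant)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    if a.std() == 0 or b.std() == 0:
        return 0.0
    return float(np.corrcoef(a, b)[0, 1])

def pick_destination(vm_history, servers):
    """servers: {name: utilization history}; returns the least-correlated server."""
    return min(servers, key=lambda s: pearson(vm_history, servers[s]))

vm = [0.2, 0.6, 0.8, 0.5, 0.7]
servers = {
    "srv-a": [0.3, 0.7, 0.9, 0.6, 0.8],   # peaks together with the VM
    "srv-b": [0.8, 0.4, 0.2, 0.5, 0.3],   # complementary load pattern
}
print(pick_destination(vm, servers))       # -> "srv-b"
```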

12.
With the increasing popularity of cloud computing services, more cloud data centers are being constructed around the globe, making the power consumption of data center elements a major challenge. Several software and hardware approaches have been proposed to handle this issue, but the problem has not yet been solved optimally. In this paper, we propose an online cloud resource management scheme with live migration of virtual machines (VMs) to reduce power consumption. To do so, a prediction-based and power-aware virtual machine allocation algorithm is proposed. We also present a three-tier framework for energy-efficient resource management in cloud data centers. Experimental results indicate that the proposed solution reduces power consumption while also improving the service-level agreement violation (SLAV) metric.
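A minimal sketch of what a prediction-based, power-aware allocation step could look like is given below: an EWMA forecast of each host's utilization is combined with a linear power model, and the VM is packed onto the fullest feasible host so that fewer hosts stay powered on. The power-model constants, forecast method, and threshold are assumptions, not the paper's algorithm.

```python
P_IDLE, P_MAX = 100.0, 250.0      # watts; illustrative linear server power model

def ewma_forecast(history, alpha=0.5):
    """Exponentially weighted moving average as a one-step utilization forecast."""
    forecast = history[0]
    for u in history[1:]:
        forecast = alpha * u + (1 - alpha) * forecast
    return forecast

def power(util):
    """Power drawn at a given utilization under the assumed linear model."""
    return P_IDLE + (P_MAX - P_IDLE) * min(util, 1.0)

def allocate(vm_cpu, hosts, threshold=0.9):
    """Best-fit by predicted utilization: pack VMs so that fewer hosts stay powered on."""
    feasible = [(ewma_forecast(h["history"]) + vm_cpu, h) for h in hosts]
    feasible = [(u, h) for u, h in feasible if u <= threshold]
    if not feasible:
        return None
    predicted, best = max(feasible, key=lambda x: x[0])
    return best, predicted

hosts = [{"id": "h1", "history": [0.4, 0.5, 0.6]},
         {"id": "h2", "history": [0.1, 0.2, 0.1]}]
host, predicted = allocate(0.3, hosts)
print(host["id"], f"predicted util {predicted:.2f}, power ~{power(predicted):.0f} W")
```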

13.
Bursts are a common pattern in user demand: they suddenly increase the workload of virtual machines (VMs) and reduce the performance and energy efficiency of cloud computing systems (CCS). Virtualization technology, with its ability to migrate VMs, attempts to solve this problem: through migration, VMs can be dynamically consolidated to match users' requests. However, a burst only temporarily increases the workload, and ignoring this leads to incorrect migration decisions, more migrations, and more Service Level Agreement Violations (SLAVs) due to overload, which in turn wastes resources, increases energy consumption, and misplaces VMs. Therefore, this paper proposes a burst-aware method to address these issues. The method consists of two algorithms: one for determining the migration time and the other for the placement of VMs. We use the PlanetLab real dataset and the CloudSim simulator to evaluate the performance of the proposed method. The results confirm its performance advantages over benchmark methods.
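A burst-aware migration trigger in the spirit of the abstract might look like the sketch below: sustained overload triggers migration, while a short spike well above the long-run level is treated as a burst and ignored. The window sizes, threshold, and burst ratio are illustrative assumptions, not the paper's parameters.

```python
from statistics import mean

def should_migrate(util_history, threshold=0.85, short=3, long=12, burst_ratio=1.5):
    """Return True only for sustained overload, not for a transient burst."""
    if len(util_history) < long:
        return False                                   # not enough history yet
    short_avg = mean(util_history[-short:])
    long_avg = mean(util_history[-long:])
    overloaded = short_avg > threshold
    burst = long_avg > 0 and short_avg / long_avg > burst_ratio
    return overloaded and not burst                    # ignore transient bursts

steady_overload = [0.9] * 12
transient_burst = [0.4] * 9 + [0.95, 0.95, 0.95]
print(should_migrate(steady_overload))   # True  -> sustained overload, migrate
print(should_migrate(transient_burst))   # False -> burst, hold off
```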

14.
As the number of Virtual Machines (VMs) consolidated on a single physical server increases with the rapid advance of server hardware, the virtual network becomes complex and fragile. Modern Network Security Engines (NSEs) are introduced to eradicate intrusions occurring in the virtual network. In this paper, we point out the inadequacy of the present live migration implementation, which prevents it from providing transparent VM relocation between hypervisors equipped with Network Security Engines (NSE-H). This occurs because the current implementation ignores the VM-related Security Context (SC) required by the NSEs embedded in NSE-H. We present CoM, a comprehensive live migration framework for NSE-H-based virtualized computing environments. We built a prototype system on Xen hypervisors to evaluate our framework and conducted experiments under various realistic application environments. The results demonstrate that our solution successfully fixes the inadequacy of the present live migration implementation, and the performance overhead is negligible.

15.
Cloud computing introduced a new paradigm in the IT industry by providing on-demand, elastic, ubiquitous computing resources to users. In a virtualized cloud data center, a large number of physical machines (PMs) host different types of virtual machines (VMs). Unfortunately, cloud data centers do not fully utilize their computing resources and waste a considerable amount of energy, which carries a high operational cost and has a dramatic impact on the environment. Server consolidation is one technique that provides efficient use of physical resources by reducing the number of active servers. Since VM placement plays an important role in server consolidation, one of the main challenges in cloud data centers is an efficient mapping of VMs to PMs. Multiobjective VM placement is generating considerable interest among researchers in academia and industry. This paper presents a detailed review of recent state-of-the-art multiobjective VM placement mechanisms that use nature-inspired metaheuristic algorithms in cloud environments. It also gives special attention to the parameters and approaches used for placing VMs onto PMs. Finally, we discuss and explore further work that can be done in this area of research.

16.
Fan Weibei, Han Zhijie, Li Peng, Zhou Jingya, Fan Jianxi, Wang Ruchuan. 《Journal of Signal Processing Systems》, 2019, 91(10): 1077-1089

With the wide application of cloud computing, the scale of cloud data center networks keeps growing. Virtual machine (VM) live migration technology is becoming more crucial in cloud data centers for the purposes of load balancing and efficient utilization of resources. Lightweight virtualization techniques have made virtual machines more portable, efficient, and easier to manage. Unlike virtual machines, containers bring more lightweight, flexible, and dense service capabilities to the cloud. Research on container migration is still in its infancy, and live migration in particular remains immature. In this paper, we present a locality-aware live migration model that takes into account the distance, available bandwidth, and costs between containers. Furthermore, we conduct comprehensive experiments on a cluster. Extensive simulation results show that the proposed method improves the utilization of server resources and the balance of all kinds of resources on the physical machines.

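A locality-aware target choice of the kind the abstract suggests could weight distance, available bandwidth, and migration cost, as in the hedged sketch below; the weights, scaling, and candidate data are assumptions, not the paper's model.

```python
def migration_score(candidate, w_dist=0.4, w_bw=0.4, w_cost=0.2):
    """Lower is better; bandwidth is inverted so that more bandwidth lowers the score."""
    return (w_dist * candidate["distance_km"] / 100.0
            + w_bw * 1.0 / max(candidate["bandwidth_gbps"], 1e-6)
            + w_cost * candidate["cost"])

def pick_target(candidates):
    """Choose the candidate host with the cheapest weighted migration score."""
    return min(candidates, key=migration_score)

candidates = [
    {"host": "edge-1", "distance_km": 10, "bandwidth_gbps": 1.0, "cost": 0.5},
    {"host": "edge-2", "distance_km": 80, "bandwidth_gbps": 10.0, "cost": 0.3},
]
print(pick_target(candidates)["host"])
```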

17.

In cloud computing, cloud assets are often underutilized because of poor allocation of tasks to virtual machines (VMs), and several conflicting factors affect the scheduling of tasks to VMs. In this paper, an effective scheduling scheme with multi-objective VM selection in cloud data centers is proposed. The proposed multi-objective VM selection and optimized scheduling works as follows. Initially, the input tasks are gathered in a task queue, and the tasks' computational time and trust parameters are measured in the task manager. The tasks are then prioritized based on the computed measures. Finally, the tasks are scheduled to the VMs in the host manager. Multiple objectives are considered for VM selection: power usage, load volume, and resource wastage are evaluated for the VMs, entropy is calculated for the measured objectives, and based on the entropy values a krill herd optimization algorithm schedules the prioritized tasks to the VMs. The experimental results show that the proposed entropy-based krill herd optimization scheduling outperforms existing general krill herd optimization, cuckoo search optimization, cloud list scheduling, minimum completion cloud, cloud task partitioning scheduling, and round-robin techniques.
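The entropy step described above is a standard entropy-weight calculation, sketched below: the objective values of the candidate VMs are normalized column-wise, per-objective entropy yields the weights, and the VM with the lowest weighted cost is selected. The krill herd search itself is omitted, and the numbers are illustrative, not from the paper.

```python
import numpy as np

def entropy_weights(p):
    """p: column-normalized objective matrix (rows = VMs, columns = objectives)."""
    n = p.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(p > 0, p * np.log(p), 0.0)
    entropy = -plogp.sum(axis=0) / np.log(n)        # per-objective entropy in [0, 1]
    diversity = 1.0 - entropy                       # more spread -> more informative objective
    return diversity / diversity.sum()

def pick_vm(objective_matrix):
    m = np.asarray(objective_matrix, dtype=float)
    p = m / m.sum(axis=0)                           # column-wise proportions
    w = entropy_weights(p)
    scores = p @ w                                  # all objectives are costs: lower is better
    return int(np.argmin(scores)), w

# rows: VM0..VM2; columns: power usage (W), load volume, resource wastage
candidates = [[120, 0.7, 0.30],
              [150, 0.5, 0.10],
              [110, 0.9, 0.40]]
best, weights = pick_vm(candidates)
print("entropy weights:", np.round(weights, 3), "-> select VM", best)
```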


18.
Cloud computing makes it possible for users to share computing power, and the multi-data-center framework has gained popularity in modern cloud computing. Due to the uncertainty of user requests, the CPU (Central Processing Unit) loads of different data centers differ: a data center with a high CPU utilization rate degrades the service provided to users, while one with a low CPU utilization rate causes high energy consumption. It is therefore important to balance CPU resources across data centers in a modern cloud computing framework. A virtual machine (VM) migration algorithm is proposed to balance CPU resources across data centers. The simulation results suggest that the proposed algorithm performs well in balancing CPU resources across data centers and in reducing energy consumption.
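A simple way to picture the cross-data-center balancing described above is the sketch below: VMs are repeatedly moved from the most-loaded to the least-loaded data center while the move still narrows the utilization gap. The data, tolerance, and VM-selection rule are assumptions, not the paper's algorithm.

```python
def balance(datacenters, tolerance=0.05, max_moves=100):
    """datacenters: {name: {"capacity": cores, "vms": {vm_name: cores}}}."""
    def util(name):
        dc = datacenters[name]
        return sum(dc["vms"].values()) / dc["capacity"]

    moves = []
    for _ in range(max_moves):
        hot = max(datacenters, key=util)
        cold = min(datacenters, key=util)
        gap = util(hot) - util(cold)
        if gap <= tolerance:
            break
        # candidate: the smallest VM on the hot data center
        vm = min(datacenters[hot]["vms"], key=datacenters[hot]["vms"].get)
        size = datacenters[hot]["vms"][vm]
        new_gap = abs((util(hot) - size / datacenters[hot]["capacity"])
                      - (util(cold) + size / datacenters[cold]["capacity"]))
        if new_gap >= gap:
            break                                   # moving it would not narrow the gap
        datacenters[hot]["vms"].pop(vm)
        datacenters[cold]["vms"][vm] = size
        moves.append((vm, hot, cold))
    return moves

dcs = {"dc-east": {"capacity": 100, "vms": {"vm1": 40, "vm2": 30, "vm3": 20}},
       "dc-west": {"capacity": 100, "vms": {"vm4": 10}}}
print(balance(dcs))   # a few moves that bring both data centers close to equal load
```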

19.
Network function virtualization (NFV) places network functions onto the virtual machines (VMs) of physical machines (PMs) located in data centers. In practice, a data flow may pass through multiple network functions, which collectively form a service chain across multiple VMs residing on the same or different PMs. Given a set of service chains, network operators have two options for placing them: (a) minimizing the number of VMs and PMs so as to reduce the server rental cost, or (b) placing VMs running network functions belonging to the same service chain on the same or nearby PMs so as to reduce the network delay. In determining the optimal service chain placement, operators face the problem of minimizing the server cost while still satisfying the end-to-end delay constraint. The present study proposes an optimization model to solve this problem using a nonlinear programming (NLP) approach. The proposed model is used to explore various operational problems in the service chain placement field. The results suggest that the optimal cost ratio for PMs with high, hybrid, and low capacity, respectively, is 4:2:1, and that the maximum operating utilization rate should be limited to 55% in order to minimize the rental cost. Regarding quality of service (QoS) relaxation, the server cost is reduced by 20%, 30%, and 32% as the end-to-end delay constraint is relaxed from 40 to 60, 80, and 100 ms, respectively. For the server location, the cost decreases by 25% when the high-capacity PMs are decentralized rather than centralized. Finally, the cost is reduced by 40% as the repetition rate in the service chains increases from 0 to 2. A heuristic algorithm, designated common sub-chain placement first (CPF), is proposed to solve the service chain placement problem at large scale (e.g., 256 PMs). It is shown that the proposed algorithm reduces the solution time by up to 86% compared with the NLP optimization model, with an accuracy reduction of just 8%.
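To illustrate the flavor of a common-sub-chain-first heuristic, the sketch below counts how often adjacent function pairs recur across chains and co-locates the most frequent pairs on the same PM before placing the rest first-fit. This is a loose illustration under assumed capacities (functions per PM), not the CPF algorithm from the paper.

```python
from collections import Counter

def cpf_place(chains, pm_capacity, num_pms):
    """Place each distinct function once, favoring co-location of common adjacent pairs."""
    pair_freq = Counter()
    for chain in chains:
        pair_freq.update(zip(chain, chain[1:]))        # adjacent function pairs

    load = [0] * num_pms                               # functions currently hosted per PM
    placement = {}                                     # function -> PM index

    def place(fn, preferred=None):
        if fn in placement:
            return placement[fn]
        order = ([preferred] if preferred is not None else []) + list(range(num_pms))
        for pm in order:
            if load[pm] < pm_capacity:
                placement[fn], load[pm] = pm, load[pm] + 1
                return pm
        raise RuntimeError("no capacity left")

    # 1) co-locate the most common pairs first
    for (a, b), _ in pair_freq.most_common():
        pm = place(a)
        place(b, preferred=pm)
    # 2) then any functions not covered by a pair
    for chain in chains:
        for fn in chain:
            place(fn)
    return placement

chains = [["fw", "nat", "ids"], ["fw", "nat", "lb"], ["dpi", "fw", "nat"]]
print(cpf_place(chains, pm_capacity=3, num_pms=3))
```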
