Similar Documents
20 similar documents found (search time: 15 ms)
1.
Drawing on convex optimization theory, this paper studies the energy-efficiency and scalability problems of modern data centers and proposes a novel virtual machine (VM) placement scheme for solving them at large scale. First, using definitions of VM placement fairness and a utility function, a basic VM placement algorithm that satisfies the server constraints of physical machines is discussed. The VM placement task is then abstracted as an optimization problem that accounts for the inherent dependencies and traffic between VMs. Given the structural differences among recently proposed data center architectures, a comparative analysis investigates the impact of network architecture, server constraints, and application dependencies on the potential performance gain of optimization-based VM placement. Compared with existing schemes, performance improves from multiple perspectives: fewer physical machines deployed, lower communication cost between VMs, and better energy efficiency and scalability of the data center.

2.
Virtualization, the underlying technology of cloud computing, enables large numbers of third-party applications to be packed into virtual machines (VMs). VM migration allows servers to be reconsolidated or reshuffled to reduce the operational costs of data centers. The network traffic cost of VM migration currently attracts limited attention, yet traffic and bandwidth demands among VMs account for a considerable share of a data center's total traffic, and migration itself adds data-transfer overhead that further increases network cost. This study considers a network-aware VM migration (NetVMM) problem in an overcommitted cloud and formulates it as an NP-complete problem. The aim is to minimize network traffic cost, considering both the inherent dependencies among the VMs that make up a multi-tier application and the underlying topology of physical machines, while striking a good trade-off between network communication cost and VM migration cost. Swarm intelligence algorithms find approximately optimal solutions through repeated iterations, which makes them a good fit for the VM migration problem. In this study, a genetic algorithm (GA) and artificial bee colony (ABC) are adapted to the VM migration problem to minimize network cost. Experimental results show that GA achieves low network cost when the number of VM instances is small, but as the problem size grows, ABC gains the advantage, and its running time is also nearly half that of GA. To the best of our knowledge, we are the first to use ABC to solve the NetVMM problem.
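As an illustration of the GA side of this idea (not the paper's actual encoding), a minimal sketch can treat a chromosome as a VM-to-host assignment and use communication cost across hosts, plus a penalty for overloaded hosts, as the fitness. The traffic matrix, capacities, and GA parameters below are made-up toy values.

```python
import random

# Toy inputs (illustrative, not from the paper): TRAFFIC[i][j] is the
# traffic volume between VM i and VM j; each host holds at most CAP VMs.
TRAFFIC = [
    [0, 9, 0, 1],
    [9, 0, 1, 0],
    [0, 1, 0, 8],
    [1, 0, 8, 0],
]
N_VMS, N_HOSTS, CAP = 4, 2, 2

def fitness(assign):
    """Communication cost across hosts plus a heavy penalty for overload."""
    cost = sum(TRAFFIC[i][j]
               for i in range(N_VMS) for j in range(i + 1, N_VMS)
               if assign[i] != assign[j])
    overload = sum(max(0, assign.count(h) - CAP) for h in range(N_HOSTS))
    return cost + 100 * overload

def ga(pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randrange(N_HOSTS) for _ in range(N_VMS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, N_VMS)         # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:                # point mutation
                child[rng.randrange(N_VMS)] = rng.randrange(N_HOSTS)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = ga()
```

With this toy traffic matrix, the cheapest feasible placement co-locates VMs 0-1 and VMs 2-3, cutting only the light cross-pair links.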

3.
The increased availability of text corpora and the growth of connectionism have stimulated renewed interest in probabilistic models of language processing in computational linguistics and psycholinguistics. The Simple Recurrent Network (SRN) is an important connectionist model because it has the potential to learn temporal dependencies of unspecified length. In addition, many computational questions about the SRN's ability to learn dependencies between individual items extend to other models. This paper reports on experiments with an SRN trained on a large corpus and examines the network's ability to learn bigrams, trigrams, and higher-order n-grams as a function of corpus size. Performance is evaluated by an information-theoretic measure of prediction (guess) ranking and by output vector entropy. With enough training and hidden units, the SRN can learn 5- and 6-gram dependencies, although learning an n-gram is contingent on its frequency and the relative frequency of other n-grams. In some cases, the network learns relatively low-frequency deep dependencies before relatively high-frequency short ones, provided the deep dependencies do not require representational shifts in hidden-unit space.
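The SRN itself requires gradient-based training, but the evaluation described here rests on n-gram statistics and entropy. A minimal sketch of that baseline (the tiny corpus is purely illustrative) computes a bigram next-word distribution and its Shannon entropy:

```python
import math
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the rat".split()

# Count how often each word w2 follows each word w1.
bigrams = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1][w2] += 1

def next_word_dist(w1):
    """Conditional distribution P(w2 | w1) estimated from bigram counts."""
    total = sum(bigrams[w1].values())
    return {w: c / total for w, c in bigrams[w1].items()}

def entropy(dist):
    """Shannon entropy (bits) of a next-word distribution."""
    return -sum(p * math.log2(p) for p in dist.values())

d = next_word_dist("the")   # cat: 0.5, mat: 0.25, rat: 0.25
```

A trained SRN's output vector plays the role of `d` here; comparing its entropy against such n-gram baselines is one way to quantify which dependencies the network has absorbed.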

4.
Seamless live migration of virtual machines over the MAN/WAN   (cited by 2: 0 self-citations, 2 citations by others)
Franco, Paul, Leon, Chetan, Cees, Joe, Inder, Bas, Satish, Phil. Future Generation Computer Systems, 2006, 22(8): 901-907
The “VM Turntable” demonstrator at iGRID 2005 pioneered the integration of Virtual Machines (VMs) with deterministic “lightpath” network services across a MAN/WAN. The results provide for a new stage of virtualization—one for which computation is no longer localized within a data center but rather can be migrated across geographical distances, with negligible downtime, transparently to running applications and external clients. A noteworthy data point indicates that a live VM was migrated between Amsterdam, NL and San Diego, USA with just 1–2 s of application downtime. When compared to intra-LAN local migrations, downtime is only about 5–10 times greater despite 1000 times higher round-trip times.

5.
Cloud computing has recently emerged as a leading paradigm to allow customers to run their applications in virtualized large-scale data centers. Existing solutions for monitoring and management of these infrastructures consider virtual machines (VMs) as independent entities with their own characteristics. However, these approaches suffer from scalability issues due to the increasing number of VMs in modern cloud data centers. We claim that scalability issues can be addressed by leveraging the similarity of VM behavior in terms of resource usage patterns. In this paper we propose an automated methodology to cluster VMs starting from the usage of multiple resources, assuming no knowledge of the services executed on them. The innovative contribution of the proposed methodology is the use of the statistical technique known as principal component analysis (PCA) to automatically select the most relevant information for clustering similar VMs. We apply the methodology to two case studies, a virtualized testbed and a real enterprise data center. In both case studies, the automatic data selection based on PCA achieves high performance, with a percentage of correctly clustered VMs between 80% and 100% even for short time series (1 day) of monitored data. Furthermore, we estimate the potential reduction in the amount of collected data to demonstrate how our proposal may address the scalability issues related to monitoring and management in cloud computing data centers.
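As a sketch of the core PCA idea (synthetic metrics, not the paper's dataset; the first principal component is extracted by power iteration rather than a full eigendecomposition), VMs can be projected onto the dominant direction of their resource-usage covariance and split there:

```python
def matvec(m, v):
    return [sum(a * b for a, b in zip(row, v)) for row in m]

def pca_first_component(data, iters=100):
    """Center the data, then find the dominant eigenvector of the
    covariance matrix by power iteration; return it with the 1-D scores."""
    n, d = len(data), len(data[0])
    means = [sum(col) / n for col in zip(*data)]
    centered = [[x - m for x, m in zip(row, means)] for row in data]
    cov = [[sum(centered[k][i] * centered[k][j] for k in range(n)) / (n - 1)
            for j in range(d)] for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = matvec(cov, v)
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    scores = [sum(a * b for a, b in zip(row, v)) for row in centered]
    return v, scores

# Synthetic VM metrics [cpu%, mem%, net MB/s]: two behavioral groups.
vms = [[80, 70, 5], [82, 68, 6], [78, 72, 4],      # compute-heavy VMs
       [10, 20, 50], [12, 18, 52], [8, 22, 48]]    # network-heavy VMs
_, scores = pca_first_component(vms)
clusters = [int(s > 0) for s in scores]            # split on the 1-D projection
```

On this toy data the first component alone cleanly separates the two behavioral groups, which is the intuition behind using PCA to shrink the inputs of the clustering step.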

6.
Cloud computing aims to provide dynamic leasing of server capabilities as scalable virtualized services to end users. However, data centers hosting cloud applications consume vast amounts of electrical energy, contributing to high operational costs and carbon footprints. Green cloud computing solutions are needed that not only minimize operational costs but also reduce environmental impact. This study focuses on the Infrastructure as a Service model, where custom virtual machines (VMs) are launched on appropriate servers available in a data center. A complete data center resource management scheme is presented that both ensures user quality of service (through service level agreements) and achieves maximum energy saving and green computing goals. Considering that a data center usually hosts tens of thousands of servers, which makes solving the resource allocation problem with an exact algorithm impractical, a modified shuffled frog leaping algorithm and improved extremal optimization are employed to solve the dynamic VM allocation problem. Experimental results demonstrate that the proposed resource management scheme exhibits excellent performance in green cloud computing.

7.
Design and Implementation of a Multi-Site Disaster Recovery System   (cited by 2: 0 self-citations, 2 citations by others)
This paper designs and implements a multi-site disaster recovery system. Data from the primary data center is backed up synchronously to a local backup center over a high-speed local network, and asynchronously to multiple remote backup centers over the Internet. Write requests at the primary data center are committed synchronously at the local backup center and, after buffering and error checking, replayed asynchronously at the remote backup centers. When a disaster occurs, the primary data center can be restored from any of the backup sites. The system offers high reliability, low cost, long backup distances, and strong disaster tolerance.

8.
In modern virtualization-based data centers, virtual machine (VM) allocation is the primary consideration for effective resource scheduling in the cloud. Assigning VMs to data nodes while accounting for inter-VM communication latency, so as to minimize the maximum communication latency, has been proven NP-hard, and little existing work on VM allocation in data center networks considers security and reliability. Targeting fault tolerance in VM allocation, this paper proposes a heuristic allocation algorithm with controllable VM redundancy. Taking minimization of the maximum communication latency as the optimization objective, the algorithm serves data nodes by constructing cliques of controllable redundancy within the set of available VMs. Experimental results on four common network topologies (Tree, VL2, Fat-tree, and BCube) show that the proposed heuristic can provide any redundancy level between 0 and 200%. Moreover, with redundancy between 0 and 40%, the VM-to-data-node matching time is reduced by 67.1% on average and the algorithm's running time by 12.8% on average.

9.
Constrained finite-horizon linear-quadratic optimal control problems are studied within the context of discrete-time dynamics that arise from the series interconnection of subsystems. A structured algorithm is devised for computing the Newton-like steps of primal-dual interior-point methods for solving a particular re-formulation of the problem as a quadratic program. This algorithm has the following properties: (i) the computation cost scales linearly in the number of subsystems along the cascade; and (ii) the computations can be distributed across a linear processor network, with localised problem data dependencies between the processor nodes and low communication overhead. The computation cost of the approach, which is based on a fixed permutation of the primal and dual variables, scales cubically in the time horizon of the original optimal control problem. These limitations are explored in a numerical example that applies the main results to model data for the cascade dynamics of an automated irrigation channel.

10.
With the widespread adoption of cloud computing, its security issues have become increasingly important. By studying storage virtualization and network virtualization technologies, this paper designs and deploys an elastic and secure virtualized cluster. The cluster uses virtualization to logically isolate the user spaces of cloud computing tenants, effectively resolving the tension between resource sharing and security. It also offers good scalability, high availability, and low cost.

11.
Live virtual machine (VM) migration is a technique for achieving system load balancing in a cloud environment by transferring an active VM from one physical host to another. This technique has been proposed to reduce the downtime of migrating overloaded VMs, but it remains time- and cost-consuming, and a large amount of memory is involved in the migration process. To overcome these drawbacks, we propose a Task-based System Load Balancing method using Particle Swarm Optimization (TBSLB-PSO) that achieves system load balancing by transferring only extra tasks from an overloaded VM instead of migrating the entire VM. We also design an optimization model to migrate these extra tasks to new host VMs by applying Particle Swarm Optimization (PSO). To evaluate the proposed method, we extend the cloud simulator (CloudSim) package and use PSO as its task scheduling model. The simulation results show that the proposed TBSLB-PSO method significantly reduces the time taken for the load balancing process compared to traditional load balancing approaches. Furthermore, in our approach the overloaded VMs are not paused during the migration process, and there is no need for the VM pre-copy process. The TBSLB-PSO method therefore eliminates VM downtime and the risk of losing the last activity performed by a customer, and increases the quality of service experienced by cloud customers.
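TBSLB-PSO encodes task-to-VM assignments; as a generic illustration of the PSO mechanics it relies on (the objective below is a placeholder cost surface, and all parameters are conventional defaults rather than the paper's), a minimal swarm looks like this:

```python
import random

def cost(x):
    """Placeholder objective: squared distance from the optimum at 0."""
    return sum(v * v for v in x)

def pso(dim=3, particles=20, iters=200, seed=7):
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(particles)]
    V = [[0.0] * dim for _ in range(particles)]
    pbest = [x[:] for x in X]                    # per-particle best positions
    gbest = min(pbest, key=cost)[:]              # swarm-wide best position
    w, c1, c2 = 0.7, 1.5, 1.5                    # inertia, cognitive, social
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (pbest[i][d] - X[i][d])
                           + c2 * rng.random() * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            if cost(X[i]) < cost(pbest[i]):
                pbest[i] = X[i][:]
                if cost(X[i]) < cost(gbest):
                    gbest = X[i][:]
    return gbest

best = pso()
```

In a scheduling setting the position vector would be decoded into a task placement and `cost` would measure makespan or transfer time, but the velocity/position update loop is the same.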

12.
李婧, 陈光宇, 唐菱, 王瑞琦. 《控制与决策》 (Control and Decision), 2020, 35(11): 2752-2760
Given that existing multi-state system availability models fall short in capturing hierarchical performance requirements, this paper defines a more general two-level multi-state weighted $k/n$ system and proposes a new hierarchical operator combined with the universal generating function to resolve cross-level weight dependencies, establishing a system availability model. To address the joint subsystem-and-component selection problem in multi-state system redundancy design, a total-cost optimization model under an availability constraint is built, and a genetic algorithm is used to obtain the economic quantities of each subsystem and component. A power supply system is used as a case study to verify the correctness and effectiveness of the proposed model and method, and a comparison illustrates the impact of the two-level performance requirements on system availability and cost. The results can support design optimization of complex systems under different performance requirements.
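To ground the weighted $k/n$ availability idea, here is a minimal binary-state simplification (the paper's systems are multi-state and two-level; the capacities, availabilities, and demand below are made-up): the system works when the total capacity of the components that happen to be up meets the demand, and its availability is the probability mass of all such states.

```python
from itertools import product

# Illustrative weighted k-out-of-n subsystem: 3 components with
# capacities (weights) and individual availabilities.
weights = [5, 3, 2]
avail = [0.9, 0.8, 0.95]
DEMAND = 7    # system works when available capacity >= DEMAND

def system_availability():
    """Enumerate all up/down component states and sum the probability
    of the states whose total capacity meets the demand."""
    total = 0.0
    for states in product([0, 1], repeat=len(weights)):
        p, cap = 1.0, 0
        for up, w, a in zip(states, weights, avail):
            p *= a if up else (1 - a)
            cap += w * up
        if cap >= DEMAND:
            total += p
    return total

A = system_availability()
```

The universal generating function is an algebraic way to perform exactly this enumeration without listing states explicitly, which is what makes it tractable for large multi-state systems.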

13.
Cloud computing provides scalable computing and storage resources over the Internet. These resources can be dynamically organized as virtual machines (VMs) that run user applications on a pay-per-use basis. The resources required by a VM are sliced from a physical machine (PM) in the cloud computing system, and a PM may hold one or more VMs. When a cloud provider creates a number of VMs, the main issue is the VM placement problem: how to place the VMs at appropriate PMs that can provision their required resources. However, if two or more VMs are placed at the same PM, a certain degree of interference arises between them due to sharing non-sliceable resources, e.g. I/O resources. This phenomenon is called VM interference. It affects the performance of applications running in VMs, especially delay-sensitive applications, which have quality of service (QoS) requirements on their data access delays. This paper investigates how to integrate QoS awareness with virtualization in cloud computing systems, formulated as the QoS-aware VM placement (QAVMP) problem. In addition to fully exploiting the resources of PMs, the QAVMP problem considers the QoS requirements of user applications and the reduction of VM interference. Three factors are therefore involved: resource utilization, application QoS, and VM interference. We first formulate the QAVMP problem as an Integer Linear Programming (ILP) model that integrates the three factors into the profit of the cloud provider. Due to the computational complexity of the ILP model, we propose a polynomial-time heuristic algorithm to solve the QAVMP problem efficiently. In the heuristic algorithm, a bipartite graph models all possible placement relationships between VMs and PMs; the VMs are then gradually placed at their preferable PMs to maximize the provider's profit as much as possible. Finally, simulation experiments demonstrate the effectiveness of the proposed heuristic algorithm in comparison with other VM placement algorithms.
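A drastically simplified sketch of such a greedy placement phase (the profit numbers and capacities are made up, and the paper's bipartite model and ILP are much richer): score every VM-PM edge, then place VMs onto their most profitable PMs while capacity lasts.

```python
# Hypothetical profit[v][p]: provider profit if VM v is placed on PM p.
profit = [
    [5, 3],
    [4, 6],
    [2, 7],
]
capacity = [2, 1]    # VM slots available on each PM

def greedy_place(profit, capacity):
    """Sort all VM-PM edges by profit and assign greedily under capacity."""
    pairs = sorted(((profit[v][p], v, p)
                    for v in range(len(profit))
                    for p in range(len(capacity))), reverse=True)
    placed, free = {}, list(capacity)
    for gain, v, p in pairs:
        if v not in placed and free[p] > 0:
            placed[v] = p
            free[p] -= 1
    return placed

placement = greedy_place(profit, capacity)
```

Here VM 2 grabs the single slot on PM 1 first (profit 7), forcing VMs 0 and 1 onto PM 0, for a total profit of 16; a matching-based heuristic would refine exactly this kind of myopic choice.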

14.
Cloud computing has recently emerged as a new paradigm to provide computing services through large-scale data centers where customers may run their applications in a virtualized environment. The advantages of the cloud in terms of flexibility and economy encourage many enterprises to migrate from local data centers to cloud platforms, thus contributing to the success of such infrastructures. However, as the size and complexity of cloud infrastructures grow, scalability issues arise in monitoring and management processes. These issues are exacerbated because available solutions typically consider each virtual machine (VM) as a black box with independent characteristics, monitored at a fine granularity for management purposes and thus generating huge amounts of data to handle. We claim that scalability issues can be addressed by leveraging the similarity between VMs in terms of resource usage patterns. In this paper, we propose an automated methodology to cluster similar VMs starting from their resource usage information, assuming no knowledge of the software executed on them. This innovative methodology combines the Bhattacharyya distance and ensemble techniques to provide a stable evaluation of the similarity between the probability distributions of multiple VM resources, considering both system- and network-related data. We evaluate the methodology through a set of experiments on data from an enterprise data center. We show that our proposal achieves high and stable performance in automatic VM clustering, with a significant reduction in the amount of data collected, which helps to lighten the monitoring requirements of a cloud data center.
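The Bhattacharyya distance at the heart of this methodology is easy to sketch for discrete histograms (the CPU-usage histograms below are illustrative, not from the paper's dataset):

```python
import math

def bhattacharyya_distance(p, q):
    """Distance between two discrete distributions; 0 means identical."""
    bc = sum(math.sqrt(a * b) for a, b in zip(p, q))  # Bhattacharyya coefficient
    return -math.log(bc)

# Illustrative CPU-usage histograms (4 bins, normalized) for three VMs.
cpu_vm1 = [0.1, 0.2, 0.4, 0.3]
cpu_vm2 = [0.1, 0.2, 0.4, 0.3]   # same usage pattern as VM 1
cpu_vm3 = [0.7, 0.2, 0.1, 0.0]   # different pattern

d_same = bhattacharyya_distance(cpu_vm1, cpu_vm2)
d_diff = bhattacharyya_distance(cpu_vm1, cpu_vm3)
```

VMs with near-zero pairwise distance across their resource histograms are candidates for the same cluster; the ensemble step then stabilizes this judgement across multiple resources.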

15.
Debris flows are characterized by clustered occurrence in time and space, sustained motion, chained disasters, a single basic type, and diverse damage mechanisms and affected targets. Selecting the location of a disaster rescue center is a typical nonlinear, non-convergent problem, and models based purely on linear techniques and constraints deviate significantly when faced with complex debris-flow indicators. This paper proposes a design method for a disaster rescue center siting model based on pseudolite measurements combined with parallel computing. Taking debris-flow disasters as an example, a debris-flow runout and deposition model is built with the discrete element method and a GPU parallel algorithm. Using the post-confluence runout and deposition characteristics of channelized debris flows and the disaster area as the simulation reference, debris-flow motion is simulated and its coverage predicted from the results. Pseudolite techniques are used to compute the degree of disaster spread and the distances to susceptible sites. Under the premise of minimizing construction and rescue costs, with the user-selected position and altitude as known conditions, minimum expected loss as the constraint, and dilution of precision as the criterion, the core candidate sites for the rescue center are obtained, yielding a siting model based on pseudolite measurements. Simulation experiments show that the model achieves high siting accuracy and efficiency and can provide data support for debris-flow disaster rescue.

16.
To assess the availability of different data center configurations, understand the main root causes of data center failures, and represent low-level details such as subsystems' behavior and their interconnections, we proposed in previous works a set of stochastic models representing different data center architectures (covering three subsystems: power, cooling, and IT) based on the TIA-942 standard. In this paper, we propose Data Center Availability (DCAV), a web-based software system that allows data center operators to evaluate the availability of their data center infrastructure through a friendly interface, without needing to understand the technical details of the stochastic models. DCAV offers an easy step-by-step interface to create and configure a data center model. Its main goal is to abstract low-level details and modeling complexities, making data center availability analysis a simple and less time-consuming task.

17.
Cloud computing has revolutionized software, platform, and infrastructure provisioning. Infrastructure-as-a-Service (IaaS) providers offer on-demand, configurable virtual machines (VMs) to tenants of cloud computing services. A key force behind widespread IaaS deployment is the use of pay-as-you-go and pay-as-you-use cost models. In these models, a service price can be composed of two dimensions: individual consumption, and a proportional value charged for service maintenance. A common practice for public providers is to dilute both capital and operational costs into predefined pricing sheets. In this context, we propose PSVE (Proportional-Shared Virtual Energy), a cost model for IaaS providers based on CPU energy consumption. Aligned with traditional commodity pricing, PSVE is composed of two key elements: an individualized cost accounted from the CPU usage of VMs (e.g., processing and networking), and a shared cost from common hypervisor management operations, distributed proportionally among VMs.

18.
钟睿明, 刘川意, 王春露, 项菲. 《软件学报》 (Journal of Software), 2014, 25(8): 1874-1886
Guaranteeing data reliability while controlling disaster recovery cost is a conflicting problem for cloud providers. Building on an analysis of existing data protection mechanisms, this paper designs a distributed "rich cloud" disaster recovery model spanning multiple cloud platforms, with which a private cloud provider can borrow virtual resources from other cloud platforms to redundantly back up its own data. To reduce data transfer response time, the model deploys multiple geographically separated rich-cloud proxies to distribute the tasks of cloud platform users and reduce the load on the private cloud platform. For cost optimization and data reliability assurance in the rich-cloud system, a cost-aware data reliability guarantee algorithm for cloud services, CAHRPA, is proposed. The algorithm dynamically selects replica locations across multiple cloud platforms according to transfer bandwidth and disaster recovery cost, thereby providing cloud providers with a cost-optimized data disaster recovery scheme. Experimental results show that CAHRPA achieves a lower-cost disaster recovery strategy while guaranteeing data reliability.

19.
Cloud computing offers new computing paradigms, capacity, and flexible solutions for high performance computing (HPC) applications. For example, Hardware as a Service (HaaS) allows users to provision a large number of virtual machines (VMs) for computation-intensive applications. Due to the large number of VMs and electronic components in a cloud-based HPC system, any fault during execution would require re-running the application, costing time, money, and energy. In this paper we present a proactive fault tolerance (FT) approach for HPC systems in the cloud that reduces wall-clock execution time and dollar cost in the presence of faults. We also develop a generic FT algorithm for HPC systems in the cloud that does not rely on a spare node prior to the prediction of a failure, along with a cost model for executing computation-intensive applications on HPC systems in the cloud. We analyse the dollar cost of provisioning spare nodes and of checkpointing FT to assess the value of our approach. Experimental results obtained from a real cloud execution environment show that the wall-clock execution time and cost of running computation-intensive applications in the cloud can be reduced by as much as 30%. The frequency of checkpointing can be reduced by up to 50% with our FT approach compared with current FT approaches.
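The checkpointing-frequency trade-off this work targets is often reasoned about with Young's classic first-order approximation, which is not the paper's own model but gives a feel for the arithmetic (the checkpoint cost and MTBF values below are hypothetical):

```python
import math

def young_interval(checkpoint_cost_s, mtbf_s):
    """Young's first-order approximation of the optimal interval between
    checkpoints: sqrt(2 * checkpoint_cost * mean_time_between_failures)."""
    return math.sqrt(2 * checkpoint_cost_s * mtbf_s)

# Hypothetical system: a checkpoint costs 60 s, node MTBF is 2 hours.
t_opt = young_interval(60, 2 * 3600)
```

A proactive approach that predicts failures can checkpoint less often than this reactive optimum, which is exactly where the reported reduction in checkpoint frequency comes from.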

20.
Wu Hao, Chen Xin, Song Xiaoyu, Zhang Chi, Guo He. The Journal of Supercomputing, 2021, 77(1): 679-710

With the wide deployment of cloud computing in scientific computing, cost minimization is increasingly critical for large-scale scientific workflows. Unfortunately, due to the highly intricate directed acyclic graph (DAG)-based structure of such workflows and the flexible usage of virtual machines (VMs) in cloud platforms, existing workflow scheduling approaches fail to strike a balance between the parallelism and the topology of the DAG-based workflow while using the VMs, which causes low VM utilization and higher cost. To address these issues, this paper presents a novel task scheduling framework named cost minimization approach with the DAG splitting method (COMSE) for minimizing the cost of running a deadline-constrained large-scale scientific workflow. First, we provide comprehensive theoretical analyses of how to improve the utilization of a resource-balanced multi-vCPU VM running multiple tasks simultaneously. Second, considering the balance between the parallelism and the topology of a workflow, we simplify the DAG-based workflow, and based on the simplified DAG, a DAG splitting method is devised to preprocess the workflow. Third, since the cloud is charged by the hour, we also design an exact algorithm to find the optimal operation pattern for a given schedule so that the consumed instance hours are minimized; this algorithm is named instance-hours minimization by Dijkstra (TOID). Finally, by employing the DAG splitting method and TOID, COMSE schedules a deadline-constrained large-scale scientific workflow on multi-vCPU VMs while pursuing two objectives: minimizing the computation cost and the communication cost. Our approach is evaluated through a rigorous performance study using real-world workflows, and the results show that COMSE outperforms existing algorithms in terms of both computation cost and communication cost.
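The TOID step rests on a shortest-path search; constructing the graph over hourly billing boundaries is the paper's contribution, but its Dijkstra core can be sketched generically (the node names and edge weights below are hypothetical placeholders):

```python
import heapq

def dijkstra(graph, source):
    """graph: {node: [(neighbor, weight), ...]}; returns shortest distances
    from source using a lazy-deletion priority queue."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, skip it
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical example: nodes could model hourly billing boundaries and
# edge weights the instance hours consumed between them.
g = {"start": [("h1", 2), ("h2", 5)],
     "h1": [("h2", 1), ("end", 7)],
     "h2": [("end", 3)]}
d = dijkstra(g, "start")
```

On this toy graph the cheapest route is start→h1→h2→end with total weight 6, mirroring how an optimal operation pattern picks the cheapest sequence of billing states.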

