Similar documents
20 similar documents found (search time: 15 ms)
1.
Virtualization is a key technology for scaling up high-performance computing systems. The virtual computing testbed at the Institute of High Energy Physics (IHEP) is built on the OpenStack cloud platform. This paper discusses three key factors in integrating virtual computing resources with the existing computing system: network architecture design, environment matching, and overall system planning. It first discusses the virtual network architecture: by deploying the neutron component, Open vSwitch (OVS), and the 802.1Q protocol, the virtualization platform connects the virtual and physical networks directly at layer 2, while layer-3 forwarding is handled by the physical switches, avoiding the bottleneck of routing traffic through the OpenStack network node. Second, to integrate virtual computing resources into the computing system, information must be dynamically synchronized with each component of that system to support domain-name assignment, automated configuration, and monitoring. The paper presents the in-house NETDB component, which dynamically synchronizes virtual machine information with the domain name system (DNS), the automated installation and management system (puppet), and the monitoring system. Finally, under overall system planning, the paper discusses unified authentication, shared storage, automated deployment, scaling, and image management.
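
The abstract does not give NETDB's interfaces, so the following is only an illustrative sketch of the kind of synchronization it performs: render DNS and puppet entries from a VM inventory. All names, record formats, and the in-memory inventory are hypothetical stand-ins for the real IHEP services.

```python
from dataclasses import dataclass

@dataclass
class VirtualMachine:
    hostname: str
    ip: str

def dns_records(vms):
    """Render BIND-style A records, one per virtual machine."""
    return "\n".join(f"{vm.hostname} IN A {vm.ip}" for vm in vms)

def puppet_nodes(vms):
    """Render a minimal puppet site-manifest stanza per virtual machine."""
    return "\n".join(f"node '{vm.hostname}' {{ include worker_node }}" for vm in vms)

if __name__ == "__main__":
    inventory = [VirtualMachine("vm-job-001", "192.168.10.21"),
                 VirtualMachine("vm-job-002", "192.168.10.22")]
    print(dns_records(inventory))
    print(puppet_nodes(inventory))
```

A real NETDB would obtain the inventory from the OpenStack APIs and push these entries to the DNS, puppet, and monitoring services rather than printing them.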

2.
Starting from the practical requirements of general cloud platforms and of the project, and combining the package-cluster architecture with the open-source CloudStack platform, an experimental platform based on a package-cluster mapping mechanism is designed. The platform adopts a layered design consisting of a hardware layer, a virtual resource layer, a scheduling layer, a package-cluster middleware layer, and a user application layer. It converts the traditional request for resources in the form of virtual machines into a request in the form of demand packages, and users can specify the structure of their packages and the physical resources each package requires. Based on an analysis of the cloud platform's scheduling principles, the paper explains how the package-cluster placement algorithms developed in the project are applied to the experimental platform, providing an experimental basis for improving subsequent research results. Finally, six important management functions of cloud management platforms are selected, and the package-cluster platform is compared with CloudStack and OpenStack in an overall functional test. The results show that the package-cluster experimental platform provides relatively comprehensive management functions and has certain market potential.
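
A minimal sketch of the package-as-request idea described above: users declare packages of resources and the middleware maps each package onto a physical host. The first-fit rule and all class names below are placeholders, not the project's own placement algorithms.

```python
from dataclasses import dataclass

@dataclass
class Package:
    cpus: int
    mem_gb: int

@dataclass
class Host:
    name: str
    free_cpus: int
    free_mem_gb: int

def place(cluster, hosts):
    """Map each package in the requested cluster to the first host that fits."""
    mapping = {}
    for i, pkg in enumerate(cluster):
        for host in hosts:
            if host.free_cpus >= pkg.cpus and host.free_mem_gb >= pkg.mem_gb:
                host.free_cpus -= pkg.cpus
                host.free_mem_gb -= pkg.mem_gb
                mapping[i] = host.name
                break
        else:
            mapping[i] = None   # no host can satisfy this package
    return mapping

print(place([Package(4, 8), Package(2, 4)], [Host("node1", 4, 16), Host("node2", 8, 32)]))
```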

3.
Traditional high energy physics computing systems are based on clusters of physical machines: resource management and job scheduling systems such as Torque, Condor, and LSF dispatch jobs onto physical machines, and they lack interface support for virtualized computing. We choose OpenStack as the underlying virtualization platform, design and implement a bridge between the upper-level scheduling system and OpenStack, and build a virtual computing cluster using a combined push-pull job execution model.
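
The abstract only names the push-pull model, so the sketch below is a guess at its shape: the bridge "pushes" (boots) worker VMs when the batch queue backs up, and each worker "pulls" jobs once it is alive. The queue contents are stubbed; a real bridge would talk to Torque/Condor/LSF on one side and the OpenStack API on the other.

```python
import queue

job_queue = queue.Queue()
for name in ["job-a", "job-b", "job-c"]:
    job_queue.put(name)

def push_side(pending_jobs, running_vms, jobs_per_vm=2):
    """Decide how many extra worker VMs to boot for the current backlog."""
    needed = -(-pending_jobs // jobs_per_vm)        # ceiling division
    return max(0, needed - running_vms)

def pull_side(q):
    """Worker loop inside a booted VM: pull jobs until the queue is empty."""
    while True:
        try:
            job = q.get_nowait()
        except queue.Empty:
            return
        print(f"running {job}")                     # placeholder for the real payload

print("VMs to boot:", push_side(job_queue.qsize(), running_vms=0))
pull_side(job_queue)
```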

4.
Based on the native resource management of the OpenStack cloud management platform, and by integrating with a compatible VMware cloud management platform, a more reasonable and efficient service assurance scheme is implemented. High availability (HA) is configured to guarantee service availability for physical machines, logical nodes, bare-metal servers, and virtual machines, and DRS is set up within availability zones to guarantee service continuity for virtual machines.

5.
There are various significant issues in resource allocation, such as maximum computing performance and green computing, which have attracted researchers' attention recently. Therefore, how to accomplish tasks with the lowest cost has become an important issue, especially considering the rate at which the resources on the Earth are being used. The goal of this research is to design a sub-optimal resource allocation system in a cloud computing environment. A prediction mechanism is realized by using support vector regression (SVR) to estimate resource utilization according to the SLA of each process, and the resources are redistributed based on the current status of all virtual machines installed in physical machines. Notably, a resource dispatch mechanism using genetic algorithms (GAs) is proposed in this study to determine the reallocation of resources. The experimental results show that the proposed scheme achieves an effective configuration by reaching an agreement between the utilization of resources within physical machines, monitored by a physical machine monitor, and the service level agreements (SLAs) between virtual machine operators and the cloud service provider. In addition, our proposed mechanism can fully utilize hardware resources and maintain desirable performance in the cloud environment.
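
A sketch of the prediction half of this scheme: fit an SVR on a sliding window of past CPU utilization and predict the next interval. The utilization trace and hyperparameters are synthetic, and the GA-based redistribution step is only indicated by a comment; none of this is the paper's exact formulation.

```python
import numpy as np
from sklearn.svm import SVR

trace = np.array([32, 35, 40, 38, 45, 50, 48, 55, 60, 58], dtype=float)  # % CPU, synthetic
window = 3

# Build (window -> next value) training pairs from the trace.
X = np.array([trace[i:i + window] for i in range(len(trace) - window)])
y = trace[window:]

model = SVR(kernel="rbf", C=10.0)
model.fit(X, y)

next_util = model.predict(trace[-window:].reshape(1, -1))[0]
print(f"predicted utilization for the next interval: {next_util:.1f}%")
# A GA would then search for a VM-to-host reassignment that satisfies each SLA
# while keeping predicted utilization within each host's capacity.
```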

6.
High energy physics experiments keep growing in scale, and so do their computing and storage demands; the soon-to-be-completed Chinese Spallation Neutron Source (CSNS) likewise places high requirements on its physics computing environment. In the era of cloud computing, flexible resource configuration and centralized management not only reduce hardware costs but also greatly improve resource utilization. This paper first reviews the current use of cloud computing in high energy physics experiments, then describes the specific computing-environment requirements of CSNS as built by the Dongguan branch of the Institute of High Energy Physics, Chinese Academy of Sciences. It then details the design and practice of an OpenStack-based cloud computing environment from five aspects: basic operations, unified authentication, storage, OpenStack itself, and resource monitoring, and explains how this environment is used to manage CSNS computing resources elastically. Finally, it summarizes the current status of the CSNS cloud computing environment and offers an outlook on future work.

7.
Cryptography is the foundation of cloud computing security. High-performance cryptographic cards that support SR-IOV virtualization are well suited to cloud crypto machines and can provide virtualized data encryption services for cloud environments, meeting their security needs. To address the poor compatibility, limited scalability, poor migratability, and low cost-effectiveness of such cards when used in cloud crypto machines, this paper proposes a software virtualization method for cryptographic cards based on an I/O front-end/back-end model, using shared memory or VIRTIO as the communication...
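
An illustrative sketch of the front-end/back-end split named in the abstract: the guest-side front end places requests on a shared ring (a Queue stands in for shared memory or a VIRTIO queue) and the host-side back end, which owns the cryptographic card, services them. The "card" here is a stub, not a real crypto device, and the message format is invented.

```python
import queue
import threading

ring = queue.Queue()          # stand-in for the shared-memory / VIRTIO channel

def backend():
    """Host side: pop requests off the ring and drive the (stubbed) crypto card."""
    while True:
        req = ring.get()
        if req is None:
            break
        op, data, reply = req
        reply.put(f"{op}({data!r}) handled by crypto card")   # stub result

def frontend(op, data):
    """Guest side: forward one request and wait for the back end's answer."""
    reply = queue.Queue(maxsize=1)
    ring.put((op, data, reply))
    return reply.get()

worker = threading.Thread(target=backend, daemon=True)
worker.start()
print(frontend("encrypt", b"payload"))
ring.put(None)   # tell the back end to exit
worker.join()
```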

8.
Monitoring of cloud computing infrastructures is an imperative necessity for cloud providers and administrators to analyze, optimize and discover what is happening in their own infrastructures. Current monitoring solutions do not fit this purpose well, mainly due to the new requirements imposed by cloud infrastructures. This paper describes in detail the main reasons why current monitoring solutions do not work well. It also provides an innovative monitoring architecture that enables the monitoring of the physical and virtual machines available within a cloud infrastructure in a non-invasive and transparent way, making it suitable not only for private but also for public cloud computing infrastructures. This architecture has been validated by means of a prototype integrating an existing enterprise-class monitoring solution, Nagios, with the control and data planes of OpenStack, a well-known stack for cloud infrastructures. As a result, our new monitoring architecture is able to extend the existing Nagios functionalities to fit the monitoring of cloud infrastructures. The proposed architecture has been designed, implemented and released as open source to the scientific community. The proposal has also been empirically validated in a production-level cloud computing infrastructure running a test bed with up to 128 VMs, where overhead and responsiveness have been carefully analyzed.
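
One integration point such an architecture implies is turning the cloud's server inventory into Nagios host objects. The sketch below does only that, with a hard-coded inventory; the paper's actual integration with OpenStack's control and data planes is richer, and a real version would pull the server list from the OpenStack APIs.

```python
servers = [
    {"name": "vm-web-01", "ip": "10.0.0.11"},
    {"name": "vm-db-01", "ip": "10.0.0.12"},
]

def nagios_host(name, address, template="generic-host"):
    """Render a standard Nagios 'define host' object for one server."""
    return (
        "define host {\n"
        f"    use        {template}\n"
        f"    host_name  {name}\n"
        f"    address    {address}\n"
        "}\n"
    )

# Write a config fragment that Nagios can include alongside its static objects.
with open("cloud_hosts.cfg", "w") as cfg:
    for s in servers:
        cfg.write(nagios_host(s["name"], s["ip"]))
```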

9.
Since the monitoring tools of traditional grid computing can no longer meet the needs of virtual resource monitoring on cloud platforms, this paper designs and implements, on the OpenStack cloud platform, a complete cloud host resource monitoring solution for private clouds. Experiments show that the solution monitors virtual resources effectively, meets the monitoring requirements for virtual resources in enterprise private clouds, and scales well.

10.

In recent years, various studies on OpenStack-based high-performance computing have been conducted. OpenStack combines off-the-shelf physical computing devices and creates a resource pool of logical computing. The configuration of the logical computing resource pool provides computing infrastructure according to the user's request and can be applied to Infrastructure as a Service (IaaS), a cloud computing service model. OpenStack-based cloud computing can provide various computing services for users using virtual machines (VMs). However, intensive computing service requests from a large number of users during large-scale computing jobs may delay job execution. Moreover, VM resources may sit idle and be wasted if users do not employ them. To resolve job delays and the waste of computing resources, a variety of studies are required, including computing task allocation, job scheduling, utilization of idle VM resources, and improvements in overall job execution speed as computing service requests increase. Thus, this paper proposes efficient job management of computing services (EJM-CS), by which idle VM resources are utilized in OpenStack and users' computing services are processed in a distributed manner. EJM-CS logically integrates idle VM resources, which have different performances, for computing services, and reduces resource waste by utilizing them. EJM-CS takes multiple computing services rather than a single computing service into consideration. It determines the job execution order by considering workload and waiting time according to the requester's job priority and the computing service type, thereby improving overall job execution performance when computing service requests increase.
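
A sketch of the ordering idea described above: jobs are ranked by requester priority, service type proxies, and how long they have waited, then handed to idle VMs. The weights and the scoring formula are illustrative, not the paper's.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    priority: int       # larger means more important
    waiting_s: int      # seconds spent in the queue
    workload: int       # abstract units of work

def score(job):
    # Assumed weighting: priority dominates, long waits get a boost, heavy jobs a small penalty.
    return 10 * job.priority + 0.1 * job.waiting_s - 0.01 * job.workload

def dispatch(jobs, idle_vms):
    """Assign the highest-scoring jobs to the currently idle VMs."""
    ordered = sorted(jobs, key=score, reverse=True)
    return list(zip(idle_vms, ordered))

jobs = [Job("sim", 1, 600, 500), Job("analysis", 3, 60, 200), Job("report", 2, 300, 50)]
print(dispatch(jobs, ["vm-idle-1", "vm-idle-2"]))
```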


11.
In this paper, we present the design and implementation of a novel model of distributed computing in Internet and Intranet environments. Our model is called the Distributed Java Machine (DJM). It is a global distributed virtual machine used to realize the concept of 'the network is the computer'. DJM exploits coarse-grained parallelism by using under-utilized workstations on the network, combining elements of object-oriented technology, distributed computing, the World-Wide Web and Java programming. It can run on machines with heterogeneous hardware and software platforms without relinking or recompilation. DJM has two unique features. First, using an original applet helper mechanism, DJM allows machines without any DJM software and with different levels of 'trust' to work together. Second, DJM implements concurrency enhancement mechanisms (one-way messages, futures and redirected futures) to increase the efficiency of method invocation. The prototype of DJM has been implemented and tested in both Intranet and Internet environments. Using workstations from our teaching laboratories, which were already under normal load, experiments show a speedup of about 5-8 times with 14 workstations in a Local Area Network (LAN) environment, and about 4.5 times with eight workstations in a Wide Area Network (WAN) environment. © 1998 John Wiley & Sons, Ltd.
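
The abstract names futures as one of DJM's concurrency mechanisms. The sketch below shows the future idea only, using Python's standard library as a stand-in for DJM's Java mechanism: the caller gets a handle immediately and blocks only when the result is actually needed.

```python
from concurrent.futures import ThreadPoolExecutor

def remote_method(x):
    """Stand-in for a method invoked on a remote object."""
    return x * x

with ThreadPoolExecutor() as pool:
    handle = pool.submit(remote_method, 7)   # returns at once, like a DJM future
    # ... the caller keeps doing other work here ...
    print(handle.result())                   # blocks only at the point of use
```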

12.
A resource management framework for collaborative computing systems over multiple virtual machines (CCSMVM) is presented to increase system performance by improving resource utilization, constructing a scalable computing environment for on-demand resource use. The framework draws on the strengths of components from grid computing, virtualization and cloud computing platforms to reduce system overheads and maintain load balancing, supported by virtual appliances, the Xen API, application virtualization and so on. Collaborative computing, the basis of virtualized resource management, and key technologies including resource planning, resource allocation, resource adjustment, resource release and collaborative computing scheduling are designed in detail. A prototype has been built, and experiments verify its correctness and feasibility. System evaluations show that the time spent on resource allocation and release is proportional to the number of virtual machines, whereas the time spent on virtual machine migration is not. CCSMVM achieves higher CPU utilization and better performance than other systems, such as Eucalyptus 2.0 and Globus 4.0; comparative analysis shows that it accelerates system execution by improving average CPU utilization. This resource management framework is therefore of some significance for optimizing the performance of virtual computing systems.

13.
In this paper, we propose a server architecture recommendation and automatic performance verification technology, which recommends and verifies appropriate server architecture on Infrastructure as a Service (IaaS) clouds offering bare metal servers, container-based virtual servers and virtual machines. Cloud services have spread rapidly, and providers now offer not only virtual machines but also bare metal servers and container-based virtual servers. However, users must design a server architecture appropriate to their requirements based on the performance of these three server types, which demands considerable technical knowledge to optimize system performance. Therefore, we study a technology that satisfies users' performance requirements across these three types of IaaS cloud. Firstly, we measure the performance and start-up time of a bare metal server, Docker containers and KVM (Kernel-based Virtual Machine) virtual machines on OpenStack while varying the number of virtual servers. Secondly, we propose a server architecture recommendation technology based on the measured quantitative data: it receives an abstract OpenStack Heat template together with functional and performance requirements, and then creates a concrete template with server specification information. Thirdly, we propose an automatic performance verification technology that executes the necessary performance tests automatically on provisioned user environments according to the template. We implement the proposed technologies, confirm their performance and show their effectiveness.
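
A sketch of the recommendation step as it is described above: given a performance requirement and measured characteristics of the three server types, pick a type that satisfies it and fill that choice into an abstract, Heat-like template. The numbers, the selection rule and the template fields are placeholders, not the paper's measured data or template schema.

```python
MEASURED = {                     # assumed relative performance and start-up time (s)
    "bare_metal": {"perf": 1.00, "startup_s": 600},
    "container":  {"perf": 0.95, "startup_s": 5},
    "vm":         {"perf": 0.75, "startup_s": 60},
}

def recommend(min_perf, max_startup_s):
    """Return the first server type (lightest first) that meets both requirements."""
    for kind in ("container", "vm", "bare_metal"):
        m = MEASURED[kind]
        if m["perf"] >= min_perf and m["startup_s"] <= max_startup_s:
            return kind
    raise ValueError("no server type satisfies the requirements")

abstract_template = {"resources": {"worker": {"type": "<to-be-decided>"}}}
abstract_template["resources"]["worker"]["type"] = recommend(min_perf=0.9, max_startup_s=30)
print(abstract_template)   # the concrete choice would then be written into a real Heat template
```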

14.
The increasing deployment of artificial intelligence has placed unprecedented demands on the computing power of cloud computing. Cloud service providers have integrated accelerators with massive parallel computing units into the data center, and these accelerators need to be combined with existing virtualization platforms to partition their computing resources. The current mainstream accelerator virtualization solution is PCI passthrough, which however does not support fine-grained resource provisioning. Some manufacturers also provide time-sliced multiplexing schemes, using drivers that cooperate with specific hardware to divide resources and time slices among virtual machines, but these suffer from poor portability and flexibility. An alternative and promising approach is API forwarding, which forwards the virtual machine's request to the back-end driver for processing through a separate driver model; yet the communication involved in API forwarding can easily become the performance bottleneck. This paper proposes Wormhole, an accelerator virtualization framework based on a client/server architecture that supports rapid delegated execution across virtual machines. It aims to provide upper-level users with an efficient and transparent way to virtualize accelerators via API forwarding while ensuring strong isolation between multiple users. By leveraging hardware virtualization features, the framework minimizes performance degradation through exitless inter-VM control-flow switches. Experimental results show that Wormhole's prototype achieves up to 5 times the performance of traditional open-source virtualization solutions such as GVirtuS in training tests of classic models.
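
A minimal sketch of plain API forwarding, the baseline technique Wormhole builds on: a front-end stub packages the accelerator call and a back-end dispatcher executes it where the device actually lives. Here both sides run in one process and the "device" is an ordinary function; Wormhole itself performs this across VMs with exitless control-flow switching, which this sketch does not model.

```python
import json

def device_matmul_shape(a_shape, b_shape):
    """Stand-in for a real accelerator routine (shape check only)."""
    assert a_shape[1] == b_shape[0]
    return [a_shape[0], b_shape[1]]

BACKEND_TABLE = {"matmul_shape": device_matmul_shape}

def forward(api_name, *args):
    """Front end: serialize the call as a guest-side stub driver would."""
    request = json.dumps({"api": api_name, "args": args})
    return backend_execute(request)

def backend_execute(request):
    """Back end: decode the request and run the real implementation."""
    msg = json.loads(request)
    return BACKEND_TABLE[msg["api"]](*msg["args"])

print(forward("matmul_shape", [64, 128], [128, 32]))   # -> [64, 32]
```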

15.
On an OpenStack cloud platform, a single physical server may run a dozen or more virtual machines at the same time, which places very high demands on the server's I/O performance. The efficiency of I/O virtualization is therefore critical to improving the network performance of the whole OpenStack platform, and introducing SR-IOV into OpenStack is one option for improving overall network performance. This paper uses comparative experiments to measure the impact of SR-IOV on network I/O performance in an OpenStack cloud. Analysis of the results shows that, with SR-IOV enabled, the I/O virtualization performance of OpenStack compute nodes improves by roughly 50%.

16.
Cloud computing is emerging as an increasingly popular computing paradigm, allowing dynamic scaling of resources available to users as needed. This requires a highly accurate demand prediction and resource allocation methodology that can provision resources in advance, thereby minimizing the virtual machine downtime required for resource provisioning. In this paper, we present a dynamic resource demand prediction and allocation framework in multi-tenant service clouds. The novel contribution of our proposed framework is that it classifies service tenants according to whether their resource requirements are expected to increase; based on this classification, our framework prioritizes prediction for those service tenants whose resource demand would increase, thereby minimizing the time needed for prediction. Furthermore, our approach assigns the service tenants to matched virtual machines and allocates the virtual machines to physical host machines using a best-fit heuristic. Performance results demonstrate how this best-fit heuristic can efficiently allocate virtual machines to hosts so that the hosts are utilized to their fullest capacity. Copyright © 2016 John Wiley & Sons, Ltd.
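
A sketch of the best-fit heuristic named above: each VM goes to the host that would be left with the least spare capacity after placement, so hosts are filled as tightly as possible. Capacities are in abstract units and the data is invented; the paper's framework also handles prediction and tenant matching, which are omitted here.

```python
def best_fit(vms, hosts):
    """vms: list of (name, demand); hosts: dict host -> free capacity. Returns a placement."""
    placement = {}
    for vm, demand in vms:
        candidates = [(free - demand, h) for h, free in hosts.items() if free >= demand]
        if not candidates:
            placement[vm] = None            # would trigger scale-out in a real system
            continue
        leftover, chosen = min(candidates)  # smallest leftover = tightest fit
        hosts[chosen] = leftover
        placement[vm] = chosen
    return placement

print(best_fit([("vm1", 4), ("vm2", 6), ("vm3", 2)], {"hostA": 8, "hostB": 7}))
```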

17.
High energy physics is a typical high-performance computing application with heavy demands on CPU power, and CPU utilization directly affects the efficiency of high energy physics computing. Virtualization shows great advantages in resource sharing and high resource utilization. This work benchmarks and optimizes the performance of KVM (Kernel-based Virtual Machine) virtual machines. It first measures the processor, disk I/O, and network I/O performance of KVM virtual machines, quantifying the performance gap between virtual and physical machines. It then analyzes the factors affecting KVM performance from the perspective of the KVM architecture, studying hardware-level and kernel-level factors, including Extended Page Tables (EPT) and CPU affinity, in order to optimize KVM performance. The optimization results show that the CPU performance loss of KVM can be reduced to about 3%. Finally, a virtual cluster for high energy physics computing is presented; the results show that the computing performance of the virtual cluster meets the needs of high energy physics computing.
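
A sketch of the CPU-affinity side of such tuning: emit a libvirt <cputune> fragment that pins each vCPU of a guest to a dedicated physical core, one common way to apply CPU affinity to KVM guests. The core numbering is illustrative and this is not the paper's own tooling; EPT is enabled in hardware/KVM, not from a script like this.

```python
def cputune_xml(vcpus, first_physical_core):
    """Render a libvirt <cputune> block pinning vCPU i to core first_physical_core + i."""
    pins = "\n".join(
        f"    <vcpupin vcpu='{v}' cpuset='{first_physical_core + v}'/>"
        for v in range(vcpus)
    )
    return f"  <cputune>\n{pins}\n  </cputune>"

print(cputune_xml(vcpus=4, first_physical_core=8))
```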

18.
Cloud computing, a recent revolution in the IT industry, is empowered by virtualisation technology. In this paradigm, the user's applications run on virtual machines (VMs). The process of selecting proper physical machines to host these virtual machines is called virtual machine placement. It plays an important role in the resource utilisation and power efficiency of a cloud computing environment. In this paper, we propose an imperialist competitive-based algorithm for the virtual machine placement problem called ICA-VMPLC. ICA is chosen as the base optimisation algorithm because of its ease of neighbourhood movement, good convergence rate and suitable terminology. The proposed algorithm investigates the search space in a unique manner to efficiently obtain an optimal placement solution that simultaneously minimises power consumption and total resource wastage. Its final solution performance is compared with several existing methods such as grouping genetic and ant colony-based algorithms as well as a bin packing heuristic. The simulation results show that the proposed method is superior to the other tested algorithms in terms of power consumption, resource wastage, CPU usage efficiency and memory usage efficiency.
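
A sketch of the kind of objective such a placement algorithm optimises: for a candidate placement, estimate total power (idle plus utilisation-proportional power per active host) and total resource wastage. The constants and the linear power model are common simplifications, not the paper's exact formulation, and the ICA search itself is omitted.

```python
P_IDLE, P_MAX = 160.0, 250.0      # assumed watts per active host

def evaluate(placement, vm_cpu, host_capacity):
    """placement: dict vm -> host. Returns (total_power, total_wastage)."""
    used = {}
    for vm, host in placement.items():
        used[host] = used.get(host, 0.0) + vm_cpu[vm]
    power = wastage = 0.0
    for host, u in used.items():
        util = u / host_capacity[host]
        power += P_IDLE + (P_MAX - P_IDLE) * util   # linear power model
        wastage += host_capacity[host] - u          # unused but powered-on capacity
    return power, wastage

placement = {"vm1": "h1", "vm2": "h1", "vm3": "h2"}
print(evaluate(placement, {"vm1": 2, "vm2": 3, "vm3": 4}, {"h1": 8, "h2": 8}))
```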

19.
王菁  王岗  高晶  李寒  马倩 《计算机工程与科学》2015,37(11):2018-2024
As the informatization of teaching deepens, campus cloud platforms are becoming increasingly common in universities, yet resource utilization in practice remains low. The core problem is that current virtual machine scheduling mechanisms ignore the characteristics of university teaching workloads, leading to load imbalance and wasted resources. To address this, starting from the requirements of university teaching applications, a dynamic virtual machine scheduling algorithm (CRS) is proposed, a course virtual machine model and a physical machine load model are defined, and a campus cloud platform supporting dynamic VM scheduling is implemented on the open-source OpenStack cloud platform. Experiments show that the proposed dynamic scheduling method achieves the goals of reducing energy consumption and balancing load.
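
A sketch of the load-aware rescheduling idea behind such an algorithm: compute each physical machine's load and, when the gap between the hottest and coolest host exceeds a threshold, pick a course VM to migrate from the former to the latter. The load metric, threshold and data are illustrative; the paper's course VM and load models are richer.

```python
def host_load(vms_on_host):
    return sum(vm["cpu_demand"] for vm in vms_on_host)

def rebalance(hosts, threshold=2.0):
    """Return (vm_name, source_host, target_host) or None if the cluster is balanced."""
    loads = {h: host_load(vms) for h, vms in hosts.items()}
    hot = max(loads, key=loads.get)
    cool = min(loads, key=loads.get)
    if loads[hot] - loads[cool] <= threshold:
        return None
    vm = min(hosts[hot], key=lambda v: v["cpu_demand"])   # cheapest VM to move
    return vm["name"], hot, cool

hosts = {
    "pm1": [{"name": "course-os", "cpu_demand": 4}, {"name": "course-db", "cpu_demand": 3}],
    "pm2": [{"name": "course-web", "cpu_demand": 1}],
}
print(rebalance(hosts))   # -> ('course-db', 'pm1', 'pm2')
```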

20.
陈鑫  徐义臻  郭禾  于玉龙  罗劼  王宇新 《计算机应用》2015,35(11):3059-3062
Private desktop clouds are widely used in scenarios such as centralized computing, centralized management, and remote work. Most existing private desktop clouds are built on the OpenStack cloud operating system, but virtual machines on it take too long to start, forcing users to wait and failing to meet the strong real-time requirements of some applications. To address this, a private desktop cloud architecture with instantly startable virtual machines (ISVM) is proposed, using a template image strategy and a network-attached storage strategy as the cloud storage layer solution. The ISVM desktop cloud architecture consists of a cloud management layer, a cloud storage layer, and a cloud service layer. Testing and analysis show that the VM startup time of the ISVM private desktop cloud architecture is about 1/100 of that of the OpenStack cloud platform, reaching the millisecond range and satisfying the applications' real-time requirements.
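
One common way to realise a "template image" strategy like the one mentioned above is a thin copy-on-write overlay backed by a shared template on network-attached storage, so a new desktop VM needs no full image copy. The sketch below shows that mechanism only; it is an assumption about the implementation (the abstract does not describe ISVM at this level) and the paths are hypothetical.

```python
import subprocess

def create_overlay(template_path, overlay_path):
    """Create a copy-on-write qcow2 disk for a new desktop VM from the template image."""
    subprocess.run(
        ["qemu-img", "create", "-f", "qcow2",
         "-b", template_path, "-F", "qcow2", overlay_path],
        check=True,
    )

create_overlay("/nas/templates/desktop-course.qcow2", "/nas/desktops/user42.qcow2")
```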
