Similar Documents
1.
In recent years, the development of cloud computing has brought new application scenarios and requirements to data centers. Virtualization, a key enabling technology of cloud services, places higher demands on the performance, scalability, and device diversity of the data center server I/O system. Retaining the traditional I/O architecture, in which devices are tightly coupled to servers, leads to resource redundancy, lower server density in the data center, greater cabling complexity, and other problems. This article therefore studies mechanisms and methods for implementing a pooled I/O resource architecture, with the goal of decoupling devices from servers so that attached servers can use I/O resources elastically and on demand, fundamentally solving the I/O system problems of cloud data centers. It further proposes an architecture that builds a multi-root I/O resource pool on top of the single-root I/O virtualization (SR-IOV) protocol. Through a hardware mechanism that maps PCI Express (PCIe) addresses and identifiers across multiple root domains, the architecture allows multiple physical servers to share and multiplex the same I/O device; through virtual I/O device hot-plugging and a multi-root sharing management mechanism, it enables real-time dynamic allocation of virtual I/O resources among servers. A prototype of the architecture was built on a Field-Programmable Gate Array (FPGA). The results show that the architecture provides good I/O performance to each sharing server.
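The attach/detach mechanics described above can be illustrated with a toy model. The following sketch (purely illustrative Python, not the paper's FPGA/PCIe implementation; all class and method names are assumptions) shows a pool handing out virtual functions of one shared device to several servers on demand:

```python
# Toy sketch of a multi-root I/O resource pool: several physical servers
# share one SR-IOV device by borrowing virtual functions (VFs) on demand.
# This models only the allocation logic, not the PCIe address mapping.
class IOResourcePool:
    def __init__(self, device_id, num_virtual_functions):
        self.device_id = device_id
        self.free_vfs = list(range(num_virtual_functions))
        self.assignments = {}  # server name -> VF number

    def attach(self, server):
        """Hot-plug a virtual I/O device into `server`, if a VF is free."""
        if server in self.assignments or not self.free_vfs:
            return None
        vf = self.free_vfs.pop(0)
        self.assignments[server] = vf
        return vf

    def detach(self, server):
        """Hot-unplug, returning the VF to the pool for reallocation."""
        vf = self.assignments.pop(server, None)
        if vf is not None:
            self.free_vfs.append(vf)
        return vf

pool = IOResourcePool("nic0", num_virtual_functions=2)
assert pool.attach("server-a") == 0
assert pool.attach("server-b") == 1
assert pool.attach("server-c") is None   # pool exhausted
pool.detach("server-a")
assert pool.attach("server-c") == 0      # freed VF is reused elastically
```

The hot-plug step is what makes the reallocation transparent to the server: a VF released by one machine immediately becomes available to another.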

2.
Cluster‐based solutions are being widely adopted for implementing flexible, scalable, low‐cost and high‐performance web server platforms. One of the main difficulties to implement these platforms is the correct dimensioning of the cluster size, so as to satisfy variable and peak demand periods. In this context, virtualization is being adopted by many organizations as a solution not only to provide service elasticity, but also to consolidate server workloads, and improve server utilization rates. A virtualized web server can be dynamically adapted to the client demands by deploying new virtual nodes when the demand increases, and powering off and consolidating virtual nodes during periods of low demand. Furthermore, the resources from the in‐house infrastructure can be complemented with a cloud provider (cloud bursting), so that peak demand periods can be satisfied by deploying cluster nodes in the external cloud, on an on‐demand basis. In this paper, we analyze the scalability of hybrid virtual infrastructures for two different distributed web server cluster implementations: a simple web cluster serving static files and a multi‐tier web server platform running the CloudStone benchmark. Copyright © 2011 John Wiley & Sons, Ltd.
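The dimensioning-plus-bursting decision described above reduces to a simple rule: satisfy as much demand as possible in-house, and lease external nodes only for the remainder. A minimal sketch, assuming illustrative capacity numbers not taken from the paper:

```python
# Cloud-bursting dimensioning sketch: serve demand from the in-house
# cluster first, and "burst" to the external cloud provider only for
# the portion that exceeds local capacity.
def dimension_cluster(demand, node_capacity, local_nodes_max):
    """Return (local_nodes, cloud_nodes) needed to serve `demand` req/s."""
    needed = -(-demand // node_capacity)  # ceiling division
    local = min(needed, local_nodes_max)
    cloud = needed - local
    return local, cloud

# Peak period: 900 req/s exceeds the 8-node in-house limit, so burst.
assert dimension_cluster(900, node_capacity=100, local_nodes_max=8) == (8, 1)
# Low demand: everything fits locally; cloud nodes can be powered off.
assert dimension_cluster(300, node_capacity=100, local_nodes_max=8) == (3, 0)
```

In practice the same rule runs periodically against measured demand, which is what makes the cluster elastic rather than statically dimensioned.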

3.
As a new computing paradigm, cloud computing delivers data center resources, including computing, storage, and other infrastructure, to users as services through virtualization, allowing users to access computing resources in the cloud on demand over the Internet to run their applications. To serve users better, a distributed cloud federates multiple cloud sites across regions into an enormous resource pool and exploits geographic distribution to improve quality of service. In recent years, research on distributed clouds has become a hot topic in both academia and industry. This article surveys the fundamental research problems in distributed cloud systems and the state of the art internationally and domestically, including architectural design of distributed cloud systems, resource scheduling and performance optimization strategies, and cloud security schemes, and discusses future trends in distributed clouds.

4.
A Dynamic Verification Mechanism for the Trustworthiness of Cloud Execution Environments
Liu Chuanyi, Lin Jie, Tang Bo. Journal of Software (软件学报), 2014, 25(3): 662-674
Providing users with a provable and verifiable trusted execution environment is an important problem for the cloud computing model. This paper proposes TCEE (trusted cloud execution environment), a mechanism for dynamically verifying the trustworthiness of a user's execution environment. By extending the existing chain of trust into the user's virtual machine, TCEE periodically verifies the integrity of the memory and file system of the user's execution environment. TCEE introduces a trusted third party (TTP) to remotely verify and audit the trustworthiness of the user's VM environment, which relieves users of maintaining the information and mechanisms needed for trust verification and also avoids leaking sensitive information about the cloud platform. A TCEE-based prototype system was implemented, and its effectiveness and performance overhead were measured quantitatively. Experimental results show that the mechanism effectively detects typical threats against memory and the file system, while introducing only a small performance overhead to the user's execution environment.
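The core of the periodic check is comparing fresh measurements against reference digests held by the TTP. A minimal sketch, assuming file contents stand in for the measured memory pages and file system (names illustrative, not TCEE's actual interface):

```python
import hashlib

# Periodic integrity-verification sketch: hash each measured region and
# compare against reference digests kept by a trusted third party (TTP).
def measure(regions):
    """Return a SHA-256 digest for each measured region (name -> bytes)."""
    return {name: hashlib.sha256(data).hexdigest()
            for name, data in regions.items()}

def verify(reference, current):
    """Return the names of regions whose digests no longer match."""
    return [name for name, digest in reference.items()
            if current.get(name) != digest]

# Baseline taken when the chain of trust is established.
baseline = measure({"kernel": b"good-kernel", "/etc/passwd": b"root:x:0:0"})
# A later measurement detects tampering with the file system.
tampered = measure({"kernel": b"good-kernel", "/etc/passwd": b"evil:x:0:0"})
assert verify(baseline, tampered) == ["/etc/passwd"]
```

Keeping `baseline` at the TTP rather than inside the VM is what removes the verification burden from the user and keeps the reference values out of the attacker's reach.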

5.
6.
Although virtualization technologies bring many benefits to cloud computing environments, as the virtual machines provide more features, the middleware layer has become bloated, introducing a high overhead. Our ultimate goal is to provide hardware-assisted solutions to improve the middleware performance in cloud computing environments. As a starting point, in this paper, we design, implement, and evaluate specialized hardware instructions to accelerate GC operations. We select GC because it is a common component in virtual machine designs and it incurs high performance and energy consumption overheads. We performed a profiling study on various GC algorithms to identify the GC performance hotspots, which contribute to more than 50% of the total GC execution time. By moving these hotspot functions into hardware, we achieved an order of magnitude speedup and significant improvement on energy efficiency. In addition, the results of our performance estimation study indicate that the hardware-assisted GC instructions can reduce the GC execution time by half and lead to a 7% improvement on the overall execution time.
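The abstract's numbers are internally consistent under Amdahl's law, which is worth seeing worked out. The fractions below (55% hotspot share, 10x hardware speedup, GC as ~14% of total time) are illustrative assumptions chosen to match the reported figures, not values from the paper:

```python
# Amdahl's-law check of the claims above: accelerating the >50% hotspot
# portion of GC by ~10x roughly halves GC time, and halving a GC phase
# that is ~14% of total runtime yields about a 7% overall improvement.
def amdahl_speedup(accelerated_fraction, factor):
    """Overall speedup when `accelerated_fraction` of the work runs `factor`x faster."""
    return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / factor)

gc_speedup = amdahl_speedup(0.55, 10)       # hotspots moved into hardware
assert 1.9 < gc_speedup < 2.1               # GC execution time roughly halved

overall = amdahl_speedup(0.14, gc_speedup)  # GC assumed ~14% of total time
assert 1.06 < overall < 1.09                # ~7% overall improvement
```

The point of the check: the remaining non-hotspot GC work caps the GC-phase speedup near 2x no matter how fast the hardware instructions are.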

7.
Cloud computing has established itself as an interesting computational model that provides a wide range of resources such as storage, databases and computing power for several types of users. Recently, the concept of cloud computing was extended with the concept of federated clouds where several resources from different cloud providers are inter-connected to perform a common action (e.g. execute a scientific workflow). Users can benefit from both single-provider and federated cloud environments to execute their scientific workflows since they can get the necessary amount of resources on demand. In several of these workflows, there is a demand for high performance and parallelism techniques since many activities are data and computing intensive and can execute for hours, days or even weeks. Some Scientific Workflow Management Systems (SWfMS) already provide parallelism capabilities for scientific workflows in single-provider clouds. Most of them rely on creating a virtual cluster to execute the workflow in parallel, but they also rely on the user to estimate the number of virtual machines to allocate for this virtual cluster, and most SWfMS use this initial virtual cluster configuration for the entire workflow execution. Dimensioning the virtual cluster to execute the workflow in parallel is then a top-priority task, since an under- or over-dimensioned virtual cluster can hurt workflow performance or unnecessarily increase financial costs. This dimensioning is far from trivial in a single-provider cloud and especially in federated clouds, due to the huge number of virtual machine types to choose from in each location and provider. In this article, we propose an approach named GraspCC-fed to produce an optimal (or near-optimal) estimate of the number of virtual machines to allocate for each workflow. GraspCC-fed extends a previously proposed GRASP-based heuristic for executing standalone applications to consider scientific workflows executed in both single-provider and federated clouds. For the experiments, GraspCC-fed was coupled to an adapted version of the SciCumulus workflow engine for federated clouds. We believe that GraspCC-fed can be an important decision-support tool for users, helping them determine an optimal virtual cluster configuration for parallel cloud-based scientific workflows.
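The GRASP metaheuristic named above alternates a greedy randomized construction with a local search. The sketch below applies that pattern to a toy version of the dimensioning problem (minimize cost subject to a deadline); the workload model, prices, and deadline are illustrative assumptions, not GraspCC-fed's actual formulation:

```python
import random

# GRASP-style sketch of virtual-cluster dimensioning: choose the VM
# count with minimal monetary cost among those meeting the deadline.
def runtime_hours(num_vms, total_work=100.0, overhead=0.1):
    # Parallelizable work plus a per-VM synchronization cost (toy model).
    return total_work / num_vms + overhead * num_vms

def cost(num_vms, deadline=8.0, price_per_vm_hour=0.5):
    r = runtime_hours(num_vms)
    if r > deadline:
        return float("inf")           # infeasible: misses the deadline
    return num_vms * r * price_per_vm_hour

def grasp_dimension(max_vms=64, iterations=20, rcl_size=5, seed=7):
    rng = random.Random(seed)
    best = max_vms
    for _ in range(iterations):
        # Construction: randomly pick from a restricted candidate list (RCL).
        rcl = sorted(range(1, max_vms + 1), key=cost)[:rcl_size]
        current = rng.choice(rcl)
        # Local search: step to a cheaper neighboring VM count while possible.
        while True:
            neighbors = [n for n in (current - 1, current + 1)
                         if 1 <= n <= max_vms]
            candidate = min(neighbors, key=cost)
            if cost(candidate) < cost(current):
                current = candidate
            else:
                break
        if cost(current) < cost(best):
            best = current
    return best

assert grasp_dimension() == 16        # smallest deadline-feasible cluster
```

Under this cost model the cheapest feasible cluster is the smallest one that still meets the deadline, which is exactly the under/over-dimensioning trade-off the abstract describes.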

8.
The increasing deployment of artificial intelligence has placed unprecedented requirements on the computing power of cloud computing. Cloud service providers have integrated accelerators with massive parallel computing units in the data center. These accelerators need to be combined with existing virtualization platforms to partition the computing resources. The current mainstream accelerator virtualization solution is the PCI passthrough approach, which however does not support fine-grained resource provisioning. Some manufacturers also provide time-sliced multiplexing schemes and use drivers cooperating with specific hardware to divide resources and time slices among virtual machines, but these suffer from poor portability and flexibility. An alternative and promising approach is API forwarding, which forwards the virtual machine's requests to a back-end driver for processing through a split driver model. Yet the communication involved in API forwarding can easily become the performance bottleneck. This paper proposes Wormhole, an accelerator virtualization framework based on a client/server architecture that supports rapid delegated execution across virtual machines. It aims to provide upper-level users with an efficient and transparent way to virtualize accelerators via API forwarding while ensuring strong isolation between multiple users. By leveraging hardware virtualization features, the framework minimizes performance degradation through exitless inter-VM control flow switching. Experimental results show that Wormhole's prototype system can achieve up to 5 times the performance of traditional open-source virtualization solutions such as GVirtuS when training classic models.
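The split driver model behind API forwarding is simple to sketch: the guest-side library does no accelerator work itself, it only packages each call and hands it to the back-end driver that owns the device. The sketch below is purely illustrative (the classes, the `vector_add` operation, and the in-process "channel" are assumptions, not Wormhole's interface):

```python
# API-forwarding sketch: a guest-side stub forwards accelerator calls to
# a host-side backend driver instead of executing them locally. A real
# system would ship the request over an inter-VM channel; here a direct
# object reference stands in for that channel.
class BackendDriver:
    """Host-side driver holding the real (here, simulated) accelerator."""
    def execute(self, op, args):
        if op == "vector_add":
            a, b = args
            return [x + y for x, y in zip(a, b)]
        raise ValueError(f"unsupported op: {op}")

class GuestStub:
    """Guest-side library: every API call is serialized and forwarded."""
    def __init__(self, backend):
        self.backend = backend  # stands in for the inter-VM channel

    def vector_add(self, a, b):
        return self.backend.execute("vector_add", (a, b))

stub = GuestStub(BackendDriver())
assert stub.vector_add([1, 2], [3, 4]) == [4, 6]
```

Every `execute` round trip crosses the VM boundary in a real deployment, which is why Wormhole's exitless control-flow switch targets precisely this path.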

9.
Recent advances in virtualization technology provide grid computing with a new way of encapsulating resources. Its encapsulation, isolation, and security can effectively mask the heterogeneity of underlying resources and allow execution environments to be customized to user application requirements, making it well suited to the complexity of grid environments and the diversity of applications. To meet the evolving requirements of service grids, this paper studies a deployment and runtime management system for virtual environments in service grids, based on new virtual machine technology. The system provides users with visual, easy-to-use remote deployment and runtime management of virtual environments. It is implemented as a standard grid service and, integrated with the CROWN service grid platform, can dynamically and transparently deploy virtual execution environments according to application-specific requirements and adaptively schedule user tasks based on resource status. Experimental analysis of the system confirms its good usability and runtime performance.

10.
As a new resource management technique, virtualization is being used ever more widely in high-energy physics. Static virtual machine clusters can no longer satisfy the dynamic computing-resource demands of multiple job queues. This paper therefore presents an elastic computing resource management system for multiple job queues in a cloud environment. The system runs compute jobs through the high-throughput computing system HTCondor and manages virtual compute nodes with the open-source cloud platform OpenStack. It presents an elastic resource management algorithm that combines a virtual resource quota service with dual thresholds to grow and shrink the resource pool as a whole, and adds a two-level buffer pool to improve scaling efficiency. The system has been deployed on IHEPCloud, the public service cloud of the Institute of High Energy Physics. Production results show that the system dynamically adjusts the number of virtual compute nodes in each queue as computing demand changes, and that CPU utilization of the computing resources improves significantly compared with traditional resource management.
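A dual-threshold rule of the kind described above can be stated in a few lines: grow the pool when the per-node queue load crosses the high threshold, shrink it when load falls below the low threshold, and hold otherwise. The thresholds and sizing targets below are illustrative assumptions, not the deployed algorithm's values:

```python
# Dual-threshold elastic scaling sketch: decide how many virtual compute
# nodes to add (+n) or remove (-n) for one job queue, based on the ratio
# of queued jobs to active nodes.
def scale_decision(queued_jobs, active_nodes, high=2.0, low=0.5):
    """Return +n to add nodes, -n to remove nodes, 0 to hold."""
    load = queued_jobs / max(active_nodes, 1)
    if load > high:
        # Grow so that load returns to the high-threshold target.
        return int(queued_jobs / high) - active_nodes
    if load < low and active_nodes > 1:
        # Shrink, but keep enough nodes to stay at the low-threshold target.
        return -(active_nodes - max(int(queued_jobs / low), 1))
    return 0

assert scale_decision(queued_jobs=30, active_nodes=5) == 10   # job burst: grow
assert scale_decision(queued_jobs=2, active_nodes=10) == -6   # idle: shrink
assert scale_decision(queued_jobs=8, active_nodes=8) == 0     # within band
```

The band between the two thresholds prevents oscillation; in the deployed system, newly requested nodes would be drawn from the two-level buffer pool first so that growth does not wait on full VM boot times.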

11.
Virtualization is a key technology for scaling high-performance computing systems. The virtual computing resource testbed at the Institute of High Energy Physics (IHEP) is built on the OpenStack cloud platform. This paper discusses three key factors in integrating virtual computing resources with the computing system: network architecture design, environment integration, and overall system planning. It first discusses the virtual network architecture: the virtualization platform deploys the Neutron component, Open vSwitch (OVS), and the 802.1Q protocol to connect the virtual and physical networks directly at layer 2, and configures the physical switches for layer-3 forwarding, avoiding the bottleneck of routing traffic through the OpenStack network node. Second, to join the computing system, virtual resources must dynamically synchronize information with its components to support domain name assignment, automated configuration, and monitoring. The paper introduces NETDB, a component developed in-house that synchronizes virtual machine information with the domain name system (DNS), the automated installation and management system (Puppet), and the monitoring system. Finally, for overall system planning, the paper discusses unified authentication, shared storage, automated deployment, scaling, and images.

12.
With the spread of cloud computing, a great deal of data processing is done with cloud services. Existing algorithms rarely account for the differing computing capabilities of virtual machines in heterogeneous systems, causing some tasks to wait too long. This paper proposes an algorithm that adjusts virtual machine workloads in real time. For the virtualized resources of cloud computing, it gives a method of evaluating a virtual machine's computing capability. Based on each VM's capability and its state changes at runtime, the algorithm adaptively adjusts the amount of work assigned to it to meet real-time requirements. Through task scheduling, it coordinates task completion times, keeps virtual machine loads dynamically balanced, shortens the total execution time of long jobs, raises system throughput and overall service capacity, and improves cost effectiveness. Experimental results show that the algorithm adaptively adjusts workload sizes and schedules tasks so as to keep virtual machine loads balanced.
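Capability-aware dispatch can be sketched as an earliest-finish-time greedy: always hand the next task to the VM that would finish it soonest given its capability score. The scores and task sizes below are illustrative assumptions (the paper's capability evaluation is dynamic, while this sketch uses fixed scores):

```python
import heapq

# Capability-aware dispatch sketch: assign each task to the VM with the
# earliest projected finish time, so faster VMs absorb more work and
# loads stay balanced in time rather than in task count.
def dispatch(tasks, capabilities):
    """tasks: list of work sizes; capabilities: {vm_name: ops_per_sec}.
    Returns {vm_name: total assigned work}."""
    # Min-heap of (projected finish time, vm name).
    heap = [(0.0, vm) for vm in sorted(capabilities)]
    heapq.heapify(heap)
    assigned = {vm: 0.0 for vm in capabilities}
    for work in sorted(tasks, reverse=True):   # place largest tasks first
        finish, vm = heapq.heappop(heap)
        assigned[vm] += work
        heapq.heappush(heap, (finish + work / capabilities[vm], vm))
    return assigned

load = dispatch([8, 4, 2, 2], {"fast-vm": 2.0, "slow-vm": 1.0})
assert load["fast-vm"] == 10   # the 2x-faster VM absorbs more raw work
assert load["slow-vm"] == 6
```

Balancing projected finish times instead of raw work counts is what prevents the heterogeneity problem the abstract describes, where equal task counts leave slow VMs as stragglers.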

13.
Autonomic Clouds on the Grid
Computational clouds constructed on top of existing Grid infrastructure have the capability to provide different entities with customized execution environments and private scheduling overlays. By designing these clouds to be autonomically self-provisioned and adaptable to changing user demands, user-transparent resource flexibility can be achieved without substantially affecting average job sojourn time. In addition, the overlay environment and physical Grid sites represent disjoint administrative and policy domains, permitting cloud systems to be deployed non-disruptively on an existing production Grid. Private overlay clouds administered by, and dedicated to the exclusive use of, individual Virtual Organizations are termed Virtual Organization Clusters. A prototype autonomic cloud adaptation mechanism for Virtual Organization Clusters demonstrates the feasibility of overlay scheduling in dynamically changing environments. Commodity Grid resources are autonomically leased in response to changing private scheduler loads, resulting in the creation of virtual private compute nodes. These nodes join a decentralized private overlay network system called IPOP (IP Over P2P), enabling the scheduling and execution of end user jobs in the private environment. Negligible overhead results from the addition of the overlay, although the use of virtualization technologies at the compute nodes adds modest service time overhead (under 10%) to computationally-bound Grid jobs. By leasing additional Grid resources, a substantial decrease (over 90%) in average job queuing time occurs, offsetting the service time overhead.

14.
In this study, we describe the further development of Elastic Cloud Computing Cluster (EC3), a tool for creating self-managed cost-efficient virtual hybrid elastic clusters on top of Infrastructure as a Service (IaaS) clouds. By using spot instances and checkpointing techniques, EC3 can significantly reduce the total execution cost as well as facilitating automatic fault tolerance. Moreover, EC3 can deploy and manage hybrid clusters across on-premises and public cloud resources, thereby introducing cloud bursting capabilities. We present the results of a case study that we conducted to assess the effectiveness of the tool based on the structural dynamic analysis of buildings. In addition, we evaluated the checkpointing algorithms in a real cloud environment with existing workloads to study their effectiveness. The results demonstrate the feasibility and benefits of this type of cluster for computationally intensive applications.

15.
With cloud and utility computing models gaining significant momentum, data centers are increasingly employing virtualization and consolidation as a means to support a large number of disparate applications running simultaneously on a chip-multiprocessor (CMP) server. In such environments, contention for shared platform resources (CPU cores, shared cache space, shared memory bandwidth, etc.) can have a significant effect on each virtual machine’s performance. In this paper, we investigate the shared resource contention problem for virtual machines by: (a) measuring the effects of shared platform resources on virtual machine performance, (b) proposing a model for estimating shared resource contention effects, and (c) proposing a transition from a virtual machine (VM) to a virtual platform architecture (VPA) that enables transparent shared resource management through architectural mechanisms for monitoring and enforcement. Our measurement and modeling experiments are based on a consolidation benchmark (vConsolidate) running on a state-of-the-art CMP server. Our virtual platform architecture experiments are based on detailed simulations of consolidation scenarios. Through detailed measurements and simulations, we show that shared resource contention affects virtual machine performance significantly, and we emphasize that a virtual platform architecture is a must for future virtualized datacenters.

16.
The purpose of this article is to provide a cloud computing system-based architecture for library automation services. In this article, a consortium-based library automation system has been conceptualized and a model architecture has been proposed. The article describes the benefit of virtualization (Virtual Machine as a library) of cloud computing toward libraries and why/how a librarian can shift from an on-premise based library system to a cloud-based system to overcome infrastructure requirements, staff requirements for administration, system maintenance, costs for backup, and so forth. The article outlines the traditional system of library automation. The proposed model provides cloud platform and application virtualization solutions for deploying and managing Service Oriented Architecture (SOA) for libraries. The model illustrates how a group of libraries, participating in a consortium, could collaboratively obtain access to LMS software applications through the virtual platform. This article is one of the first of its kind to propose such an architecture for library automation.

17.
The wave of integration between artificial intelligence and every industry is in full swing, pushing traditional cloud platforms to embrace many-core architectures typified by graphics processing units (GPUs). To satisfy different tenants' demands for compute-intensive workloads such as machine learning and deep learning, cloud platforms have invested heavily in GPU virtualization technology. Security is a key aspect of GPU virtualization on cloud platforms, yet there has been little systematic discussion of it. This paper therefore addresses the fundamental security questions of GPU virtualization on cloud platforms: the potential security threats that typical GPU virtualization techniques introduce, the security requirements of GPU virtualization, and the evolution of protective techniques. First, it analyzes typical GPU virtualization methods and their security mechanisms in depth, and surveys attacks against existing GPU virtualization methods, including side-channel, covert-channel, and memory-overflow attacks. Second, it dissects the potential security threats introduced by GPU virtualization on cloud platforms and summarizes the corresponding security requirements. Finally, it proposes five research directions for GPU security techniques: coordinated isolation of compute and memory resources on the GPU to ensure performance isolation among multi-tenant tasks, behavioral profiling of GPU tasks to detect malicious programs, secure scheduling of GPU tasks, blocking of multi-layer joint attacks, and sanitization of GPU-resident information. This paper aims to provide a useful reference for the development and application of GPU virtualization security techniques on cloud platforms.

18.
Adding virtual machines to a cloud computing environment makes full use of the resource-sharing advantages of cloud computing and its parallel, distributed computing capabilities. This paper proposes a model system in which virtual machines can be added or removed dynamically as needed, which effectively reduces cloud usage fees and improves cost efficiency. It studies two resource scheduling algorithms applicable to the model system, Adaptive First Come First Serve (AFCFS) and Largest Job First Served (LJFS), which avoid unnecessary delays and maximize system performance, an important property for resource scheduling algorithms in distributed systems. The simulation experiments use performance metrics such as response time, waiting time, and arrival rate, together with a cost metric, the cost-performance ratio, to compare the efficiency of the algorithms and validate the cost efficiency of the model system. The results show that the algorithms can be applied efficiently in cloud computing environments and improve both system performance and cost efficiency.
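The two queueing disciplines differ only in how they order the waiting jobs, which a few lines make concrete. This sketch shows plain FCFS versus LJFS on one server (the adaptive part of AFCFS is not modeled; job sizes are illustrative):

```python
# Scheduling-order sketch: FCFS serves jobs in arrival order, while LJFS
# (Largest Job First Served) reorders the queue so the biggest jobs run
# first, reducing the waiting time of long jobs.
def fcfs_order(jobs):
    return list(jobs)

def ljfs_order(jobs):
    return sorted(jobs, key=lambda j: j["size"], reverse=True)

def completion_times(order):
    """Completion time of each job when run back-to-back on one server."""
    t, done = 0.0, {}
    for job in order:
        t += job["size"]
        done[job["id"]] = t
    return done

queue = [{"id": "a", "size": 2}, {"id": "b", "size": 9}, {"id": "c", "size": 5}]
assert [j["id"] for j in fcfs_order(queue)] == ["a", "b", "c"]
assert [j["id"] for j in ljfs_order(queue)] == ["b", "c", "a"]
assert completion_times(fcfs_order(queue))["b"] == 11  # long job waits
assert completion_times(ljfs_order(queue))["b"] == 9   # long job runs first
```

Note the trade-off visible even in this toy: LJFS finishes the longest job sooner at the cost of short jobs waiting longer, which is why the paper evaluates both policies against waiting time, response time, and cost metrics.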

19.
Over the past few years, research and development in bioinformatics (e.g. genomic sequence alignment) has grown with each passing day fueling continuing demands for vast computing power to support better performance. This trend usually requires solutions involving parallel computing techniques because cluster computing technology reduces execution times and increases genomic sequence alignment efficiency. One example, mpiBLAST is a parallel version of NCBI BLAST that combines NCBI BLAST with message passing interface (MPI) standards. However, as most laboratories cannot build up powerful cluster computing environments, Grid computing framework concepts have been designed to meet the need. Grid computing environments coordinate the resources of distributed virtual organizations and satisfy the various computational demands of bioinformatics applications. In this paper, we report on designing and implementing a BioGrid framework, called G‐BLAST, that performs genomic sequence alignments using Grid computing environments and accessible mpiBLAST applications. G‐BLAST is also suitable for cluster computing environments with a server node and several client nodes. G‐BLAST is able to select the most appropriate work nodes, dynamically fragment genomic databases, and self‐adjust according to performance data. To enhance G‐BLAST capability and usability, we also employ a WSRF Grid Service Portal and a Grid Service GUI desk application for general users to submit jobs and host administrators to maintain work nodes. Copyright © 2008 John Wiley & Sons, Ltd.

20.
With the increasing adoption of Big Data technologies as basic tools for the ongoing Digital Transformation, there is a high demand for data-intensive applications. In order to efficiently execute such applications, it is vital that cloud providers change the way hardware infrastructure resources are managed to improve their performance. However, the increasing use of virtualization technologies to achieve an efficient usage of infrastructure resources continuously widens the gap between applications and the underlying hardware, thus decreasing resource efficiency for the end user. Moreover, this scenario is especially troublesome for Big Data applications, as storage resources are among the most heavily virtualized, thus imposing a significant overhead for large-scale data processing. This paper proposes a novel PaaS architecture specifically oriented toward Big Data, where the scheduler offers disks as resources alongside the more common CPU and memory resources, aiming to provide a better storage solution for the user. Furthermore, virtualization overheads are reduced to the bare minimum by replacing heavy hypervisor-based technologies with operating-system-level virtualization based on light software containers. This architecture has been deployed on a Big Data infrastructure at the CESGA supercomputing center, used as a testbed to compare its performance with OpenStack, a popular private cloud platform. Results have shown significant performance improvements, reducing the execution time of representative Big Data workloads by up to 4.5×.
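Treating disks as a first-class schedulable dimension changes placement decisions in a way a small sketch can show: a data-intensive container is only placed on a node that can dedicate it the requested disks, even if other nodes have spare CPU and memory. Node specs and request shapes below are illustrative assumptions, not the deployed scheduler's interface:

```python
# Disk-aware placement sketch: the scheduler checks CPU, memory, AND
# disks before placing a container, instead of only CPU and memory.
def place(request, nodes):
    """request: {"cpu": c, "mem_gb": m, "disks": d};
    nodes: {name: free resources}. Returns the first node satisfying all
    three dimensions (deducting the request), else None."""
    for name, free in nodes.items():
        if all(free[k] >= request[k] for k in ("cpu", "mem_gb", "disks")):
            for k in ("cpu", "mem_gb", "disks"):
                free[k] -= request[k]
            return name
    return None

cluster = {
    "node1": {"cpu": 8, "mem_gb": 32, "disks": 2},
    "node2": {"cpu": 8, "mem_gb": 32, "disks": 0},  # CPU/mem free, no disks
}
big_data_job = {"cpu": 4, "mem_gb": 16, "disks": 2}
assert place(big_data_job, cluster) == "node1"
# node1's disks are now taken and node2 never had any, so placement fails
# even though both nodes still have free CPU and memory.
assert place(big_data_job, cluster) is None
```

A CPU/memory-only scheduler would have accepted the second job onto node2 and paid the virtualized-storage penalty the abstract describes; exposing disks in the resource vector makes that contention visible at scheduling time.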

