Similar Documents
20 similar documents found.
1.
Cloud datacenters host hundreds of thousands of physical servers that offer computing resources for executing customer jobs. Because failures of these physical machines are considered normal rather than exceptional in large-scale distributed systems, evaluating the availability of a datacenter is essential for both cloud providers and customers. Although providing a highly available and reliable computing infrastructure is essential to maintaining customer confidence, cloud providers also want highly utilized datacenters to increase the profitability of the services they deliver. Cloud computing architectural solutions should therefore account for both high availability for customers and high resource utilization, so that delivering services remains profitable for cloud providers. This paper presents a highly reliable cloud architecture that leverages the 80/20 rule (80% of cluster failures come from 20% of physical machines) to identify failure-prone physical machines and divide each cluster into reliable and risky sub-clusters. Furthermore, customer jobs are divided into latency-sensitive and latency-insensitive types. The results show that only about 1% of all requested jobs are extremely latency-sensitive and require 99.999% availability. By serving revenue-generating jobs, which account for less than 50% of all requested jobs, within the reliable sub-cluster of physical machines, cloud providers can make their businesses more profitable by avoiding service level agreement violation penalties and improving their reputations.
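A minimal sketch of the partitioning idea described in this abstract, assuming only a per-machine failure count is available; the threshold, data structures, placement rule, and job attributes are illustrative assumptions, not the authors' implementation.

```python
# Sketch: split a cluster into reliable/risky sub-clusters using the 80/20 rule,
# then route latency-sensitive (revenue-generating) jobs only to the reliable part.
# All names, thresholds, and the placement policy below are illustrative.

def partition_cluster(failure_counts, risky_fraction=0.2):
    """failure_counts: dict of machine_id -> observed failure count."""
    ranked = sorted(failure_counts, key=failure_counts.get, reverse=True)
    cutoff = max(1, int(len(ranked) * risky_fraction))
    risky = set(ranked[:cutoff])       # ~20% of machines, most failure-prone
    reliable = set(ranked[cutoff:])    # remaining ~80%
    return reliable, risky

def place_job(job, reliable, risky, failure_counts):
    """Latency-sensitive jobs are confined to the reliable sub-cluster."""
    pool = reliable if job["latency_sensitive"] else (reliable | risky)
    return min(pool, key=lambda m: failure_counts[m])   # least failure-prone machine

if __name__ == "__main__":
    failures = {"m1": 12, "m2": 0, "m3": 1, "m4": 7, "m5": 0}
    reliable, risky = partition_cluster(failures)
    job = {"id": "j42", "latency_sensitive": True}
    print("risky:", risky, "-> placed on:", place_job(job, reliable, risky, failures))
```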

2.
Cloud computing is a form of distributed computing that promises to deliver reliable services through next-generation data centers built on virtualized compute and storage technologies. It is becoming truly ubiquitous, and as cloud infrastructures become essential components for providing Internet services, cloud providers are deploying an increasing number of energy-hungry data centers. Because cloud providers often rely on large data centers to offer the resources required by users, the energy consumed by cloud infrastructures has become a key environmental and economic concern. Much energy is wasted in these data centers because of under-utilized resources, thereby contributing to global warming. To conserve energy, these under-utilized resources need to be used efficiently; to achieve this, jobs must be allocated to cloud resources in a way that improves both performance and energy efficiency. In this paper, a model for an energy-aware resource utilization technique is proposed to efficiently manage cloud resources and enhance their utilization. It further helps reduce the energy consumption of clouds by using server consolidation through virtualization without degrading the performance of users' applications. An artificial bee colony based energy-aware resource utilization technique corresponding to the model has been designed to allocate jobs to resources in a cloud environment. The performance of the proposed algorithm has been evaluated against existing algorithms using the CloudSim toolkit. The experimental results demonstrate that the proposed technique outperforms existing techniques by minimizing the energy consumption and execution time of applications submitted to the cloud. Copyright © 2014 John Wiley & Sons, Ltd.
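To make the idea concrete, here is a greatly simplified sketch of an artificial-bee-colony style search for a job-to-host mapping that lowers estimated power. It merges the employed/onlooker phases, omits the scout phase, and uses an assumed linear power model; it is not the paper's algorithm or its CloudSim setup.

```python
import random

JOB_LOAD = [30, 20, 50, 10, 40]    # CPU demand per job (arbitrary units, illustrative)
HOST_CAP = [100, 100]              # capacity per host
IDLE_W, PEAK_W = 70.0, 250.0       # assumed idle/peak power per active host

def energy(assign):
    """Estimated total power of a mapping; empty hosts are assumed switched off."""
    total = 0.0
    for h, cap in enumerate(HOST_CAP):
        load = sum(JOB_LOAD[j] for j, host in enumerate(assign) if host == h)
        if load == 0:
            continue
        total += IDLE_W + (PEAK_W - IDLE_W) * min(load / cap, 1.0)
        if load > cap:
            total += 1e6               # penalise infeasible overload
    return total

def neighbour(assign):
    new = assign[:]
    new[random.randrange(len(new))] = random.randrange(len(HOST_CAP))
    return new

def abc_search(colony=10, cycles=200):
    foods = [[random.randrange(len(HOST_CAP)) for _ in JOB_LOAD] for _ in range(colony)]
    best = min(foods, key=energy)
    for _ in range(cycles):
        for i, food in enumerate(foods):          # employed + onlooker phases, merged
            cand = neighbour(food)
            if energy(cand) < energy(food):
                foods[i] = cand
        best = min(foods + [best], key=energy)
    return best, energy(best)

if __name__ == "__main__":
    mapping, e = abc_search()
    print("best mapping:", mapping, "estimated power:", round(e, 1), "W")
```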

3.
From cloud computing to cloud manufacturing
Cloud computing is changing the way industries and enterprises do business, in that dynamically scalable and virtualized resources are provided as a service over the Internet. This model creates a brand new opportunity for enterprises. In this paper, some of the essential features of cloud computing are briefly discussed with regard to end-users, enterprises that use the cloud as a platform, and cloud providers themselves. Cloud computing is emerging as one of the major enablers for the manufacturing industry; it can transform the traditional manufacturing business model, help align product innovation with business strategy, and create intelligent factory networks that encourage effective collaboration. Two types of cloud computing adoption in the manufacturing sector are suggested: manufacturing with direct adoption of cloud computing technologies, and cloud manufacturing, the manufacturing version of cloud computing. Cloud computing has already been adopted in some key areas of manufacturing, such as IT, pay-as-you-go business models, scaling production up and down per demand, and flexibility in deploying and customizing solutions. In cloud manufacturing, distributed resources are encapsulated into cloud services and managed in a centralized way. Clients can use cloud services according to their requirements, requesting services ranging from product design, manufacturing, testing, and management to all other stages of a product's life cycle.

4.
Each cloud service provider exposes only an interface to its own cloud infrastructure for enabling clients to use its cloud resources. However, there are a number of difficulties for cloud providers in ensuring proper functioning. One of the main problems of a cloud provider is a lack of resources to support a huge number of on-demand provisioning requests, yet resources cannot be shared among different cloud providers, since federation is not a basic operation of a cloud provider. The most efficient way to overcome this problem is to extend the cloud provider's interface with automatic negotiation, so that the best agreement between different cloud providers can be formed dynamically on the basis of the service level agreement. In this article, we propose an extension of the Open Cloud Computing Interface, the standardized interface for cloud computing, to support automatic negotiation between different cloud providers. To prove the efficiency and effectiveness of our approach, we implement a prototype to evaluate the key ideas presented in this article.

5.
With the development of cloud computing, IT users (individuals, enterprises, and even public service providers) are transferring their jobs or businesses to public online services provided by professional information service companies. These companies provide applications as public resources to support the business operations of their customers. However, no cloud computing service vendor (CCSV) can satisfy the full functional information system requirements of its customers. As a result, customers often have to simultaneously use services distributed across different clouds and do some connectivity work manually. Service convergence and multi-cloud integration will lead to new business models and trigger new integration technologies that provide solutions to IT users' complicated requirements. This paper first reviews the development of cloud computing from business and technical viewpoints and then discusses the requirements and challenges of service convergence and multi-cloud integration. Third, a model-based architecture for multi-cloud integration is provided. Business logic modelling for cross-organizational collaboration, service modelling, and operation modelling methods with the related model mapping technology are discussed in detail, and some key enabling technologies are developed. Finally, case studies are presented to illustrate the implementation of the technologies developed in the paper.

6.
Cloud computing allows the execution and deployment of different types of applications, such as interactive databases or web-based services, which require distinct types of resources. These applications lease cloud resources for a considerably long period and usually occupy various resources to maintain a high quality of service (QoS). In contrast, general big data batch processing workloads are less QoS-sensitive and require massively parallel cloud resources for short periods. Despite the elasticity of cloud computing, the fine-scale characteristics of cloud-based applications may cause temporarily low resource utilization in cloud computing systems, while process-intensive, highly utilized workloads suffer from performance issues. Utilization-efficient scheduling of heterogeneous workloads is therefore a challenging issue for cloud owners. In this paper, to address the impact of this heterogeneity on low utilization of cloud computing systems, a joint resource allocation scheme for cloud applications and processing jobs is presented to enhance cloud utilization. The main idea is to schedule processing jobs and cloud applications together in a preemptive way. However, utilization-efficient resource allocation requires exact modeling of the workloads, so a novel methodology to model processing jobs and other cloud applications is proposed first. Such jobs are modeled as a collection of parallel and sequential tasks in a Markovian process, which enables us to analyze and calculate the resources required to serve the tasks efficiently. The next step uses the proposed model to develop a preemptive scheduling algorithm for the processing jobs in order to improve resource utilization and its associated costs in the cloud computing system. Accordingly, a preemption-based resource allocation architecture is proposed to effectively and efficiently utilize idle reserved resources for processing jobs in cloud paradigms. Performance metrics, such as the service time of the processing jobs, are then investigated. The accuracy of the proposed analytical model and scheduling analysis is verified through simulations and experiments, which also shed light on the achievable QoS level for the preemptively allocated processing jobs.
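The core preemption idea can be sketched as follows: batch processing jobs opportunistically use capacity that QoS applications have reserved but are not currently using, and are paused when the reservation owner reclaims it. The data structures and the reclaim rule are illustrative assumptions, not the paper's Markovian model or scheduler.

```python
from dataclasses import dataclass, field

@dataclass
class Cluster:
    capacity: int
    reserved_in_use: int = 0                               # cores actively used by QoS apps
    batch_running: list = field(default_factory=list)      # (job_id, cores) of batch jobs

    def idle(self):
        return self.capacity - self.reserved_in_use - sum(c for _, c in self.batch_running)

    def submit_batch(self, job_id, cores):
        """Batch jobs run only on currently idle capacity."""
        if self.idle() >= cores:
            self.batch_running.append((job_id, cores))
            return True
        return False                                       # would be queued elsewhere

    def qos_demand(self, cores):
        """A QoS app reclaims reserved capacity; preempt batch jobs until it fits."""
        preempted = []
        while self.idle() < cores and self.batch_running:
            preempted.append(self.batch_running.pop())     # pause the most recent batch job
        self.reserved_in_use += cores                      # assumes demand never exceeds capacity
        return preempted

if __name__ == "__main__":
    c = Cluster(capacity=16, reserved_in_use=4)
    c.submit_batch("b1", 6)
    c.submit_batch("b2", 4)
    print("preempted:", c.qos_demand(10))                  # batch jobs yield to the QoS burst
```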

7.
Information and communication technology (ICT) has a profound impact on the environment because of its large amount of CO2 emissions. In recent years, the research field of "green", low-power networking infrastructures has become important for both service/network providers and equipment manufacturers. Cloud computing, an emerging technology, can increase the utilization and efficiency of hardware equipment. A cloud datacenter needs a job scheduler to arrange resources for executing jobs. In this paper, we propose a scheduling algorithm for cloud datacenters based on a dynamic voltage and frequency scaling (DVFS) technique. Our scheduling algorithm can efficiently increase resource utilization and hence decrease the energy consumed in executing jobs. Experimental results show that our scheme reduces energy consumption more than other schemes do, without sacrificing job execution performance. In short, we provide a green, energy-efficient scheduling algorithm using the DVFS technique for cloud computing datacenters.
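A minimal sketch of the DVFS intuition behind such schedulers: run a job at the lowest frequency that still meets its deadline, since dynamic power grows roughly cubically with frequency (P ~ C·V²·f with V scaling with f). The frequency list, job parameters, and energy model are illustrative assumptions, not the paper's algorithm.

```python
FREQS_GHZ = [1.0, 1.4, 1.8, 2.2, 2.6]      # assumed available P-states

def pick_frequency(cycles_giga, deadline_s):
    """Lowest frequency whose execution time still fits within the deadline."""
    for f in FREQS_GHZ:                     # ascending order
        if cycles_giga / f <= deadline_s:
            return f
    return FREQS_GHZ[-1]                    # deadline too tight: run at maximum speed

def relative_energy(cycles_giga, f):
    """Energy ~ power * time, with power ~ f^3 (arbitrary units)."""
    return (f ** 3) * (cycles_giga / f)

if __name__ == "__main__":
    job = {"cycles_giga": 9.0, "deadline_s": 6.0}
    f = pick_frequency(**job)
    print(f"chosen {f} GHz,",
          f"energy {relative_energy(job['cycles_giga'], f):.1f} vs",
          f"{relative_energy(job['cycles_giga'], FREQS_GHZ[-1]):.1f} at max frequency")
```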

8.
Cloud computing is an innovative computing paradigm designed to provide a flexible and low-cost way to deliver information technology services on demand over the Internet. Proper scheduling and load balancing of resources are required for efficient operation in a distributed cloud environment. Since cloud computing is growing rapidly and customers are demanding better performance and more services, scheduling and load balancing of cloud resources have become a very interesting and important area of research. As more and more consumers assign their tasks to the cloud, service-level agreements (SLAs) between consumers and providers are emerging as an important aspect. The proposed prediction model is based on past usage patterns and aims to provide optimal resource management without violating the agreed service-level conditions in cloud data centers. It considers SLAs in both the initial scheduling stage and the load balancing stage, and it pursues several objectives: minimum makespan, minimum degree of imbalance, and minimum number of SLA violations. The experimental results show the effectiveness of the proposed system compared with other state-of-the-art algorithms.
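For reference, a short sketch of the two scheduling metrics named in this abstract under their common definitions: makespan as the maximum per-VM completion time, and degree of imbalance as (T_max - T_min)/T_avg over per-VM execution times. The sample data and the exact formulas are assumptions; the paper may define them slightly differently.

```python
def per_vm_times(assignment, job_len, vm_mips):
    """assignment: job index -> VM index; returns total execution time per VM."""
    times = [0.0] * len(vm_mips)
    for j, vm in enumerate(assignment):
        times[vm] += job_len[j] / vm_mips[vm]
    return times

def makespan(times):
    return max(times)

def degree_of_imbalance(times):
    avg = sum(times) / len(times)
    return (max(times) - min(times)) / avg if avg else 0.0

if __name__ == "__main__":
    job_len = [400, 300, 500, 200]      # million instructions (illustrative)
    vm_mips = [100, 250]                # VM speeds
    assignment = [0, 1, 1, 0]           # a candidate schedule
    t = per_vm_times(assignment, job_len, vm_mips)
    print("per-VM times:", t, "makespan:", makespan(t),
          "imbalance:", round(degree_of_imbalance(t), 3))
```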

9.
This paper presents a procedure for comparing the cost of leasing IT resources in a commercial computing cloud against the cost incurred in using on-premise resources. The procedure starts by calculating the number of computers required as a function of parameters that describe the application's features and execution conditions. By measuring the required execution time for different parameter values, we determined that this dependence is a second-order polynomial, whose coefficients were calculated by processing the results of a fractional factorial design. On that basis, we calculate the cost of the computing and storage resources required for the application to run. The same calculation model can be applied to both a personal user and a cloud provider, although the results will differ because of different hardware exploitation levels and economies of scale. Such a calculation enables cloud providers to determine the marginal costs in their service prices, and allows users to calculate the costs they would incur by executing the same application using their own resources.
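A worked sketch of the calculation idea: fit the measured "machines required" as a second-order polynomial of an application parameter, then turn the fitted value into a leasing cost. The sample measurements, price, and parameter are hypothetical, and the simple least-squares fit stands in for the paper's fractional factorial design analysis.

```python
import numpy as np

# parameter value (e.g., problem size) vs. machines required, from hypothetical runs
param = np.array([10, 20, 30, 40, 50], dtype=float)
machines = np.array([3, 7, 14, 23, 35], dtype=float)

coeffs = np.polyfit(param, machines, deg=2)     # [a, b, c] for a*x^2 + b*x + c
model = np.poly1d(coeffs)

def cloud_cost(x, hours, price_per_machine_hour):
    """Leasing cost for running the application at parameter value x."""
    return float(np.ceil(model(x))) * hours * price_per_machine_hour

if __name__ == "__main__":
    print("fitted polynomial coefficients:", np.round(coeffs, 4))
    print("estimated cost for x=45, 10 h at $0.20/machine-hour:",
          round(cloud_cost(45, 10, 0.20), 2), "USD")
```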

10.
夏之斌  毛京丽  齐开诚 《软件》2013,(9):130-132
Cloud computing is a technological revolution in the IT industry and has become the direction in which the industry is heading; this shift is making the operation of IT infrastructure increasingly centralized and professionalized. When using cloud computing, however, users lack the ability to configure the network, as this capability is currently not exposed to them. Although virtual network services in the cloud have attracted growing attention from cloud providers, support for them is still incomplete. This paper proposes a cloud-based virtual network management system that provides users with cloud-based network services in a dynamic way: it can configure virtual networks according to user requirements and optimize them so as to maximize virtual network performance.

11.
Cloud computing has attracted great interest from both the academic and industrial communities, and different paradigms, architectures, and applications based on the cloud concept have emerged. Although many of them have been quite successful, efforts mainly focus on the study and implementation of particular setups; a generic and more flexible solution for cloud construction is missing. In this paper, we present a composition-based approach to cloud computing (compositional cloud), using Imperial College Cloud (IC Cloud) as a demonstration example. Instead of studying a specific cloud computing system, our approach aims to provide a generic framework in which various cloud computing architectures and implementation strategies can be studied systematically. With our approach, cloud computing providers and adopters are able to design and compose their own systems in a quick and flexible manner. Cloud computing systems will no longer have fixed shapes but will be dynamic and adjustable according to the requirements of different application domains.

12.
With the acceleration of new infrastructure construction ("new infrastructure"), cloud computing will gain new development opportunities. As the infrastructure underlying cloud computing, the servers inside data centers are continuously upgraded, which makes the computing resources heterogeneous. How to schedule jobs efficiently in such a heterogeneous cloud environment is one of the current research hotspots. For the multi-objective optimization scheduling problem in heterogeneous cloud environments, a multi-objective reinforcement learning job scheduling method with AHP-based weighting is designed. First, the execution time, platform energy consumption, …
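A minimal sketch of the AHP weighting step mentioned above: derive objective weights (e.g., for execution time, platform energy consumption, and a third objective) from a pairwise comparison matrix via its principal eigenvector, with the usual consistency check. The comparison values are illustrative assumptions, not the paper's data.

```python
import numpy as np

# Saaty-scale pairwise comparisons among three scheduling objectives (illustrative):
# objective 1 vs 2 = 3, objective 1 vs 3 = 5, objective 2 vs 3 = 2
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
weights = w / w.sum()                        # normalised AHP weights

# consistency ratio (CR < 0.1 is the usual acceptance threshold; RI = 0.58 for n = 3)
lam_max = eigvals.real[k]
ci = (lam_max - len(A)) / (len(A) - 1)
cr = ci / 0.58

if __name__ == "__main__":
    print("objective weights:", np.round(weights, 3), "CR:", round(cr, 3))
```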

13.
Fog and cloud computing are ubiquitous computing paradigms based on the concepts of utility and grid computing. Cloud service providers permit flexible and dynamic access to virtualized computing resources on a pay-per-use basis. Users with mobile devices prefer to process as many applications as possible locally, defining a fog layer that provides infrastructure for storing and processing applications; when the resource demands cannot be satisfied by the fog layer of the mobile device, the job is transferred to the cloud for processing. Owing to the large number of jobs and limited resources, the fog is prone to deadlock at very large scale, so quality of service (QoS) and reliability are important aspects of a heterogeneous fog and cloud framework. In this paper, a Social Network Analysis (SNA) technique is used to detect resource deadlocks in the fog layer of a mobile device. A new concept of free space fog is proposed, which helps remove deadlocks by collecting available free resources from all allocated jobs. A set of rules is proposed for a deadlock manager to increase the utilization of resources in the fog layer and decrease the response time of requests when a deadlock is detected by the system. Two different clouds (a public cloud and a virtual private cloud), in addition to the fog layer and free space fog, are used to manage deadlocks effectively; the selection among them is made by assigning priorities to requests and providing resources accordingly from the fog and the clouds. The proposed framework can therefore provide both QoS and reliability to users. CloudSim is used to evaluate resource utilization with a Resource Pool Manager (RPM), and the results show the effectiveness of the proposed technique.
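To illustrate what resource-deadlock detection looks like in such a setting, here is a generic wait-for-graph cycle check: job A waiting for a resource held by job B becomes an edge A -> B, and a cycle means deadlock. This is a plain graph algorithm, not the paper's SNA-based detector, and the recovery step via "free space fog" is only hinted at in a comment.

```python
def build_wait_for(holds, wants):
    """holds/wants: dict job -> set of resources. Returns an adjacency dict."""
    edges = {j: set() for j in holds}
    for j, res in wants.items():
        for r in res:
            for k, held in holds.items():
                if k != j and r in held:
                    edges[j].add(k)            # j waits for a resource k holds
    return edges

def find_cycle(edges):
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {v: WHITE for v in edges}

    def dfs(v, path):
        colour[v] = GREY
        path.append(v)
        for u in edges[v]:
            if colour[u] == GREY:
                return path[path.index(u):]    # the deadlocked job set
            if colour[u] == WHITE:
                cyc = dfs(u, path)
                if cyc:
                    return cyc
        colour[v] = BLACK
        path.pop()
        return None

    for v in edges:
        if colour[v] == WHITE:
            cyc = dfs(v, [])
            if cyc:
                return cyc
    return None

if __name__ == "__main__":
    holds = {"j1": {"r1"}, "j2": {"r2"}}
    wants = {"j1": {"r2"}, "j2": {"r1"}}
    # a detected cycle would then be broken by releasing spare capacity
    # (the "free space fog" idea) or by moving one job to the cloud
    print("deadlock among:", find_cycle(build_wait_for(holds, wants)))
```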

14.
Efficient allocation of computational resources to services is one of the predominant challenges in a cloud computing environment, and the advent of cloud brokerage and federated cloud computing systems increases the complexity of cloud resource management. Cloud brokers are third-party organizations that work as intermediaries between service providers and cloud providers: they rent different types of cloud resources from a number of cloud providers and sublet these resources to the requesting service providers. In this paper, an autonomic performance management approach is introduced that provides dynamic resource allocation capabilities for deploying a set of services over a federated cloud computing infrastructure, taking into account both the availability and the demand of cloud computing resources. A distributed control approach provides the autonomic computing features of the proposed framework via a feedback-based control loop; it is developed using a decomposition-coordination methodology, named interaction balance, for interactive bidding of cloud computing resources. The primary goals of the proposed approach are to maintain service level agreements, maximize profit, and minimize operating cost for the service providers and the cloud broker. The application of the interaction balance methodology, and the prioritization of profit maximization for the cloud broker and the service providers during resource allocation, are the novel contributions of the proposed approach.
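The decomposition-coordination flavour of such a scheme can be sketched as a simple price-coordination loop: the broker posts a price per unit of capacity, each service provider independently bids the demand that suits it, and the price is adjusted until total demand matches the rented capacity. The demand curves, step size, and capacity are illustrative assumptions and do not reproduce the paper's interaction balance formulation.

```python
CAPACITY = 100.0          # units of capacity the broker has rented (illustrative)

def provider_demand(price, value, max_units, elasticity=40.0):
    """A provider's request under a simple linear demand curve (assumed)."""
    return min(max(elasticity * (value - price), 0.0), max_units)

def coordinate(providers, price=0.0, step=0.01, iters=1500):
    """Iteratively adjust the price until demand roughly balances capacity."""
    demand = 0.0
    for _ in range(iters):
        demand = sum(provider_demand(price, v, m) for v, m in providers)
        price = max(price + step * (demand - CAPACITY) / CAPACITY, 0.0)
    return price, demand

if __name__ == "__main__":
    providers = [(2.0, 60.0), (1.5, 80.0), (1.0, 40.0)]   # (value per unit, max units)
    price, demand = coordinate(providers)
    print("coordinated price:", round(price, 3), "total demand:", round(demand, 1))
```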

15.
Virtualization technology makes data centers more dynamic and easier to administer. Today, cloud providers offer customers access to complex applications running on virtualized hardware. Nevertheless, large virtualized data centers become stochastic environments, and the simplification on the user side leads to many challenges for the provider, who has to find cost-efficient configurations and deal with dynamic environments to ensure service level objectives (SLOs). We introduce a software solution that reduces the degree of human intervention needed to manage clouds. It is designed as a multi-agent system (MAS) and placed on top of the Infrastructure as a Service (IaaS) layer. Worker agents allocate resources, configure applications, check the feasibility of requests, and generate cost estimates; they are equipped with application-specific knowledge that allows them to estimate the type and number of necessary resources. During runtime, a worker agent monitors its job and adapts its resources to ensure the specified quality of service, even in noisy clouds where job instances are influenced by other jobs. Worker agents interact with a scheduler agent, which takes care of limited resources and performs cost-aware scheduling by assigning jobs to times with low costs. The whole architecture is self-optimizing and able to use public or private clouds. Building a private cloud raises the challenge of finding a mapping of virtual machines (VMs) to hosts, so we present a rule-based mapping algorithm for VMs that offers an interface where policies can be defined and combined in a generic way. The algorithm performs the initial mapping at request time as well as remapping during runtime, and it deals with policy and infrastructure changes. An energy-aware scheduler and the availability of cheap resources provided by a spot market are also analyzed. We evaluated our approach by building an SaaS stack that assigns resources according to an energy function and ensures the SLOs of two different applications, a brokerage system and high-performance computing software. Experiments were conducted on a real cloud system and through simulations.
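In the spirit of the rule-based, policy-combining interface described above, here is a minimal sketch of VM-to-host mapping with composable policies: a hard capacity filter plus weighted soft scores. The policy names, host attributes, and weighting scheme are illustrative assumptions, not the paper's rule language.

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    free_cpu: int
    free_ram: int
    watts_per_core: float

def fits(vm, host):                        # hard rule: capacity filter
    return host.free_cpu >= vm["cpu"] and host.free_ram >= vm["ram"]

def energy_policy(vm, host):               # soft rule: prefer energy-efficient hosts
    return -host.watts_per_core

def consolidation_policy(vm, host):        # soft rule: prefer already-loaded hosts
    return -host.free_cpu

def place(vm, hosts, policies, weights):
    candidates = [h for h in hosts if fits(vm, h)]
    if not candidates:
        return None                        # would trigger remapping or queueing
    score = lambda h: sum(w * p(vm, h) for p, w in zip(policies, weights))
    return max(candidates, key=score)

if __name__ == "__main__":
    hosts = [Host("h1", 8, 32, 12.0), Host("h2", 2, 8, 9.0), Host("h3", 16, 64, 15.0)]
    vm = {"cpu": 4, "ram": 16}
    best = place(vm, hosts, [energy_policy, consolidation_policy], [1.0, 0.5])
    print("chosen host:", best.name if best else None)
```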

16.
Elasticity can be seen as the ability of a system to increase or decrease its allocated computing resources in a dynamic and on-demand way. It is an important feature provided by cloud computing that has been widely used in web applications and is also gaining attention in the scientific community. Considering the possibilities of using elasticity in this context, a question arises: are the available public cloud solutions suitable for providing elasticity to scientific applications? To answer the question, we first present a survey on the use of cloud computing in scientific scenarios, providing an overview of the subject. Next, we describe the elasticity mechanisms offered by major public cloud providers and analyze the limitations of these solutions in providing elasticity for scientific applications. As the main contribution of the article, we also present an analysis of some initiatives that are being developed to overcome the current challenges. In our opinion, current computational clouds are developing rapidly but have not yet reached the maturity level necessary to meet all of the elasticity requirements of scientific applications. We expect that in the coming years the efforts of numerous researchers in this area will identify and address these challenges and lead to better and more mature technologies that improve cloud computing practices.

17.
Cloud computing is a fast-growing field and arguably a new computing paradigm in which computing resources are provided as services over the Internet and users can access resources based on their payments. Access control is an important security issue in cloud computing. In this paper, a Contract RBAC model with continuous services, which allows a user to access various services provided by different providers, is proposed. The Contract RBAC model extends the well-known RBAC model to cloud computing, and the extended definitions in the model increase its ability to meet new challenges. The model can provide continuous services with more flexible security management to meet application requirements, including intra-cross cloud services and inter-cross cloud services. Finally, performance analyses comparing the proposed scheme with the traditional approach are given. The proposed Contract RBAC model can therefore achieve more efficient management for cloud computing environments.
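A minimal sketch of an access check in the spirit of a contract-extended RBAC: a request is allowed only if the user's role grants the operation and a still-valid contract with the providing cloud covers the requested service. The data model below is an illustrative assumption, not the paper's formal Contract RBAC definitions.

```python
from datetime import date

ROLE_PERMS = {                      # role -> set of (service, operation) permissions
    "analyst": {("storage", "read")},
    "admin": {("storage", "read"), ("storage", "write")},
}

CONTRACTS = [                       # (user, provider, service, valid_until)
    ("alice", "cloudA", "storage", date(2026, 1, 1)),
]

def allowed(user, role, provider, service, op, today=date(2025, 6, 1)):
    """Role must grant the operation AND a valid contract must cover the service."""
    if (service, op) not in ROLE_PERMS.get(role, set()):
        return False
    return any(u == user and p == provider and s == service and today <= until
               for u, p, s, until in CONTRACTS)

if __name__ == "__main__":
    print(allowed("alice", "analyst", "cloudA", "storage", "read"))   # True
    print(allowed("alice", "analyst", "cloudB", "storage", "read"))   # False: no contract
```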

18.
Against the background of the rapid development of the new generation of large-scale Internet, the volume of data being generated continues to grow, making it difficult for users' local devices to satisfy the storage and computation demands of massive data. At the same time, cloud computing, as an economical, efficient, and flexible model, offers ease of use, pay-as-you-go pricing, and freedom from time and space constraints; it has fundamentally changed how traditional IT infrastructure is provided and paid for, and can effectively address the storage and computation of ever-growing volumes of information. Thus, without expensive storage costs and computing resource consumption, resource-constrained users can turn to a Cloud Service Provider (CSP) for the services they desire. Among the three cloud service types, Infrastructure as a Service (IaaS) combines virtualization, distributed computing, and network storage technologies to provide and rent computing infrastructure resources (such as compute, storage, and networking) over the Internet. Relying on the computing infrastructure resources provided by the IaaS layer, cloud computing frees users from purchasing additional equipment, greatly reducing usage costs, and lays the foundation for higher-level services. However, as cloud computing services continue to develop, IaaS-related security issues have drawn increasing attention. To provide a systematic understanding of the research progress and current state of IaaS security, this paper surveys IaaS security issues and the solutions offered by academia and industry in detail. First, it introduces the relevant theoretical foundations of IaaS and analyzes the different types of cloud security threats. Then, starting from existing academic research, it analyzes the security threats in the compute, storage, and network services provided by IaaS and surveys existing solutions. In addition, it focuses on the IaaS security services offered by cloud service providers in industry, including data security, network protection, and other security services. Finally, it looks ahead to future trends of IaaS cloud security in academic and industrial settings.

19.
Cloud computing is an upcoming and promising solution for utility computing that provides resources on demand. As it has grown into a business model, a large number of cloud service providers now exist in the cloud market, which is expanding exponentially. The many cloud service providers with almost similar functionality pose a selection problem for cloud users. To assist users in selecting the best service for their requirements, a framework has been developed in which users list their quality of service (QoS) expectations while service providers express their offerings. The experience of existing cloud users is also taken into account in order to select the best cloud service provider. This work identifies some new QoS metrics, in addition to a few existing ones, and defines them in a way that makes it easy for both users and providers to express their expectations and offers, respectively, in a quantified manner. Further, a dynamic and flexible model is proposed that uses a variant of the ranked voting method to consider users' requirements and suggest the best cloud service provider. Case studies affirm the correctness and effectiveness of the proposed model. Copyright © 2016 John Wiley & Sons, Ltd.
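As an illustration of ranked-voting provider selection, the sketch below uses a plain Borda count over (possibly partial) rankings supplied by existing users; the paper uses its own variant, and the provider names and ballots here are purely hypothetical.

```python
from collections import defaultdict

def borda(rankings, candidates):
    """rankings: list of ordered provider lists (best first), possibly partial."""
    scores = defaultdict(float)
    for ballot in rankings:
        n = len(ballot)
        for pos, provider in enumerate(ballot):
            scores[provider] += n - pos        # n points for 1st place, 1 for last ranked
    return sorted(candidates, key=lambda p: scores[p], reverse=True)

if __name__ == "__main__":
    candidates = ["ProviderA", "ProviderB", "ProviderC"]
    # each ballot reflects one existing user's QoS experience (illustrative)
    rankings = [
        ["ProviderB", "ProviderA", "ProviderC"],
        ["ProviderB", "ProviderC"],
        ["ProviderA", "ProviderB", "ProviderC"],
    ]
    print("suggested order:", borda(rankings, candidates))
```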

20.
With the rapid growth of applications' computing demands and the continuous increase in heterogeneous computing resources, task scheduling has become an important research problem in cloud computing. Task scheduling matches user tasks to suitable virtual computing resources, and the quality of the algorithm directly affects response time, makespan, energy consumption, cost, resource utilization, and a series of other performance indicators closely tied to the economic interests of users and cloud service providers. This survey reviews and discusses the research progress of task scheduling algorithms for the two mainstream classes of cloud tasks, independent tasks and scientific workflows, in combination with the characteristics of different cloud environments. It first reviews existing task scheduling types and scheduling mechanisms together with their advantages and disadvantages. It then summarizes the characteristics of task scheduling in single-cloud environments and in cross-cloud environments such as hybrid clouds, multi-clouds, and federated clouds, describes the methods, optimization objectives, and strengths and weaknesses of representative studies, and on this basis discusses the current state of task scheduling research in each environment. It further organizes the scheduling optimization methods used in the literature for each environment and clarifies their scope of application. Finally, it concludes that task scheduling for compute- and data-intensive applications in cross-cloud environments deserves particular attention in future research.
