Similar Documents
20 similar documents found (search time: 15 ms)
1.
Today, data centers are the main source of cloud services provided through a service level agreement (SLA). Most research papers on cloud resource management concentrate on reducing host energy consumption and SLA violation (SLAV) to minimize operational cost. However, they do not consider the penalty that the cloud provider must pay to users because of SLAV. In this paper, we propose a new penalty-aware and cost-efficient method that treats cloud resource management as a cost problem. In this method, parameters such as user budget, penalty, and host energy consumption cost play an important role in minimizing operational cost, which leads to higher profit for the cloud provider. The simulation results with CloudSim show that our proposed method minimizes operational cost compared with prior resource management methods. Copyright © 2016 John Wiley & Sons, Ltd.

2.
With the rapid development of cloud computing, the number of cloud users is growing exponentially. Data centers have come under great pressure, and the problem of power consumption has become increasingly prominent. However, many idle resources that are geographically distributed across the network can serve as resource providers for cloud tasks. These distributed resources may not be able to support resource-intensive applications alone because of their limited capacity, but their combined capacity increases considerably if they cooperate and share resources. Therefore, in this paper, a new resource-providing model called "crowd-funding" is proposed. In the crowd-funding model, idle resources are collected to form a virtual resource pool for providing cloud services. Based on this model, a new task scheduling algorithm, RC-GA (a genetic algorithm for task scheduling based on the resource crowd-funding model), is proposed. Because the crowd-funded resources come from different heterogeneous devices, their stability must be treated as unequal. The scheduling objectives of RC-GA are therefore designed to increase the stability of task execution while reducing power consumption. In addition, to reduce random errors in the evolution process, the roulette wheel selection operator of the genetic algorithm is improved. The experiments show that RC-GA achieves good results.
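
The abstract does not reproduce the RC-GA itself; as an illustration only, the following is a minimal sketch of a genetic algorithm with roulette-wheel selection for mapping tasks onto heterogeneous resources, where the task lengths, resource speeds, power values, and fitness weights are all assumptions rather than the paper's data.

```python
import random

# Minimal GA sketch for mapping tasks to heterogeneous resources (illustrative only;
# not the paper's RC-GA). Assumed inputs: task lengths, resource speeds, resource power.
TASKS = [400, 250, 900, 120, 600]          # hypothetical task lengths (MI)
SPEED = [1000, 500, 750]                   # hypothetical resource speeds (MIPS)
POWER = [90.0, 45.0, 60.0]                 # hypothetical resource power draw (W)

def fitness(chrom):
    # Lower is better: weighted sum of makespan and energy (weights are assumptions).
    finish = [0.0] * len(SPEED)
    energy = 0.0
    for task, res in zip(TASKS, chrom):
        t = task / SPEED[res]
        finish[res] += t
        energy += POWER[res] * t
    return 0.7 * max(finish) + 0.3 * (energy / 100.0)

def roulette_select(pop, fits):
    # Roulette wheel on inverted fitness because we minimize.
    weights = [1.0 / (1e-9 + f) for f in fits]
    return random.choices(pop, weights=weights, k=1)[0]

def evolve(pop_size=30, generations=100, pc=0.9, pm=0.05):
    pop = [[random.randrange(len(SPEED)) for _ in TASKS] for _ in range(pop_size)]
    for _ in range(generations):
        fits = [fitness(c) for c in pop]
        new_pop = [min(pop, key=fitness)]              # elitism
        while len(new_pop) < pop_size:
            p1, p2 = roulette_select(pop, fits), roulette_select(pop, fits)
            cut = random.randrange(1, len(TASKS))
            child = p1[:cut] + p2[cut:] if random.random() < pc else p1[:]
            child = [random.randrange(len(SPEED)) if random.random() < pm else g
                     for g in child]
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best mapping:", best, "fitness:", round(fitness(best), 3))
```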

3.
In recent years, the increasing use of cloud services has led to the growth and importance of cloud data centers. One of the challenging issues in cloud environments is the high energy consumption of data centers, which has been ignored in the corporate competition to develop them. The most important problems of large cloud data centers are high energy costs and greenhouse gas emissions, so researchers are seeking effective approaches to decreasing energy consumption in cloud data centers. One of the preferred techniques for reducing energy consumption is virtual machine (VM) placement. In this paper, we present a VM allocation algorithm that reduces energy consumption and Service Level Agreement Violation (SLAV). The proposed algorithm is based on the best-fit decreasing algorithm and uses learning automata theory, correlation coefficients, and an ensemble prediction algorithm to make better VM allocation decisions. The experimental results indicate improvements in energy consumption and SLAV compared with well-known baseline VM allocation algorithms.
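
The learning-automata, correlation, and prediction components of the proposed algorithm are not detailed in the abstract; the sketch below shows only the plain power-aware best-fit decreasing baseline that such allocators typically extend, with hypothetical host and VM figures.

```python
# Minimal power-aware best-fit decreasing sketch (the baseline the paper builds on;
# the learning-automata, correlation, and prediction steps are not shown here).
def power_increase(host, vm_cpu):
    """Estimated power growth if the VM is added (linear power model, an assumption)."""
    used, cap = host["used"], host["cap"]
    p_idle, p_max = host["p_idle"], host["p_max"]
    before = p_idle + (p_max - p_idle) * used / cap
    after = p_idle + (p_max - p_idle) * (used + vm_cpu) / cap
    return after - before

def pabfd(vms, hosts):
    """Allocate VMs (sorted by decreasing CPU demand) to the host whose power grows least."""
    placement = {}
    for vm in sorted(vms, key=lambda v: v["cpu"], reverse=True):
        candidates = [h for h in hosts if h["used"] + vm["cpu"] <= h["cap"]]
        if not candidates:
            placement[vm["id"]] = None          # no feasible host
            continue
        best = min(candidates, key=lambda h: power_increase(h, vm["cpu"]))
        best["used"] += vm["cpu"]
        placement[vm["id"]] = best["id"]
    return placement

hosts = [{"id": "h1", "cap": 2000, "used": 0, "p_idle": 70, "p_max": 250},
         {"id": "h2", "cap": 1000, "used": 0, "p_idle": 50, "p_max": 135}]
vms = [{"id": "vm1", "cpu": 800}, {"id": "vm2", "cpu": 300}, {"id": "vm3", "cpu": 900}]
print(pabfd(vms, hosts))
```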

4.
Data centers play a crucial role in the delivery of cloud services by enabling on-demand access to shared resources such as software, platforms and infrastructure. Virtual machine (VM) allocation is one of the challenging tasks in data center management since user requirements, typically expressed as service-level agreements, have to be met with minimum operational expenditure. Despite their huge processing and storage facilities, data centers are among the major contributors to the greenhouse gas emissions of IT services. In this paper, we propose a holistic approach for a large-scale cloud system in which the cloud services are provisioned by several data centers interconnected over a backbone network. Leveraging the possibility of virtualizing the backbone topology in order to bypass IP routers, which are major power consumers in the core network, we propose a mixed integer linear programming (MILP) formulation for VM placement that aims at minimizing both the power consumption of the virtualized backbone network and the resource usage inside data centers. Since the general holistic MILP formulation requires heavy and long-running computations, we partition the problem into two sub-problems, namely intra- and inter-data-center VM placement. In addition, for the inter-data-center VM placement, we also propose a heuristic that solves the virtualized backbone topology reconfiguration computation in reasonable time. We thoroughly assessed the performance of our proposed solution, comparing it with another notable MILP proposal in the literature; the collected experimental results show the benefit of the proposed management scheme in terms of power consumption, resource utilization and fairness for medium-size data centers. Copyright © 2013 John Wiley & Sons, Ltd.
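
The paper's full MILP, including the virtualized backbone, is not given in the abstract; the following is a much simplified, illustrative formulation of the intra-data-center part (minimize the power of powered-on hosts under capacity constraints), written with the PuLP library and hypothetical demand and capacity numbers.

```python
# Much simplified, illustrative VM-placement MILP (intra-data-center part only);
# the paper's formulation also models the virtualized backbone, which is omitted here.
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum

vm_cpu = {"vm1": 800, "vm2": 300, "vm3": 900, "vm4": 400}   # hypothetical demands
host_cap = {"h1": 2000, "h2": 1000, "h3": 1200}             # hypothetical capacities
host_power = {"h1": 250, "h2": 135, "h3": 170}              # hypothetical peak power (W)

prob = LpProblem("vm_placement", LpMinimize)
x = LpVariable.dicts("x", [(v, h) for v in vm_cpu for h in host_cap], cat=LpBinary)
y = LpVariable.dicts("y", host_cap, cat=LpBinary)           # host powered on?

# Objective: minimize total power of powered-on hosts.
prob += lpSum(host_power[h] * y[h] for h in host_cap)

# Each VM goes to exactly one host.
for v in vm_cpu:
    prob += lpSum(x[(v, h)] for h in host_cap) == 1

# Host capacity is respected, and a used host must be powered on.
for h in host_cap:
    prob += lpSum(vm_cpu[v] * x[(v, h)] for v in vm_cpu) <= host_cap[h] * y[h]

prob.solve()
for (v, h), var in x.items():
    if var.value() == 1:
        print(v, "->", h)
```

This linking of the assignment variables x to the on/off variables y through the capacity constraint is the standard bin-packing trick; a real formulation would add memory and bandwidth constraints and the backbone terms.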

5.
Scalability, reliability, and flexibility in cloud computing services are obligatory under the growing demand for computation power. To sustain scalability, a proper virtual machine migration (VMM) approach is needed that balances quality of service against service-level agreement violation. In this paper, a novel VMM algorithm based on Lion-Whale optimization is developed by integrating the Lion optimization algorithm and the Whale optimization algorithm. The optimal virtual machine (VM) migration is performed by the Lion-Whale VMM based on a new fitness function that regulates the resource use, migration cost, and energy consumption of the VM placement. The proposed VM migration strategy is evaluated over four cloud setups with different configurations, simulated using the CloudSim toolkit. Its performance is validated against existing optimization-based VMM algorithms, such as particle swarm optimization and genetic algorithms, using performance measures such as energy consumption, migration cost, and resource use. Simulation results show that the proposed Lion-Whale VMM outperforms the other approaches in optimal VM placement for the cloud computing environment, with a reduced migration cost of 0.01, maximal resource use of 0.36, and minimal energy consumption of 0.09.
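
The exact form of the new fitness function is not stated in the abstract; a minimal sketch of one plausible weighted formulation is given below, where the weights and the normalization are assumptions and the example values reuse the figures reported above.

```python
# Illustrative weighted fitness for a candidate VM-to-host mapping (lower is better).
# The weights and the normalization are assumptions, not the paper's exact function.
def placement_fitness(resource_use, migration_cost, energy, w=(0.4, 0.3, 0.3)):
    """All three inputs are assumed to be normalized to [0, 1].
    Resource use is rewarded (subtracted); migration cost and energy are penalized."""
    w_use, w_cost, w_energy = w
    return -w_use * resource_use + w_cost * migration_cost + w_energy * energy

# Example: compare two candidate placements produced by the optimizer.
candidate_a = placement_fitness(resource_use=0.36, migration_cost=0.01, energy=0.09)
candidate_b = placement_fitness(resource_use=0.20, migration_cost=0.05, energy=0.15)
print("A:", round(candidate_a, 4), "B:", round(candidate_b, 4))  # A wins (more negative)
```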

6.
The number of cloud data centers, each consisting of hundreds of hosts, has increased tremendously around the world due to the growing demand for cloud services. The energy consumption of data centers is expected to reach 139.8 billion kWh by 2020. Many algorithms have been proposed to reduce energy consumption as well as service level agreement violation by minimizing the number of active hosts. Currently proposed algorithms do not consider the data center architecture, the physical position of hosts, or the energy consumption of the numerous switches inside data centers. In this paper, a novel hierarchical cloud resource management scheme is proposed that not only minimizes the number of hosts but also aggregates virtual machines onto a limited subset of data center racks and modules to minimize energy consumption. Experimental results with CloudSim show that the proposed algorithm reduces energy consumption by up to 26% and service level agreement violation by up to 96%.

7.
Cloud computing has drastically reduced the price of computing resources through the use of virtualized resources that are shared among users. However, the established large cloud data centers have a large carbon footprint owing to their excessive power consumption, and inefficiency in resource utilization and power consumption results in low fiscal gains for service providers. Therefore, data centers should adopt an effective resource-management approach. In this paper, we present a novel load-balancing framework with the objective of minimizing the operational cost of data centers through improved resource utilization. The framework utilizes a modified genetic algorithm for realizing the optimal allocation of virtual machines (VMs) over physical machines. The experimental results demonstrate that the proposed framework improves resource utilization by up to 45.21%, 84.49%, 119.93%, and 113.96% over a recent approach and three other standard heuristics-based VM placement approaches.

8.
Cloud computing has emerged as a promising technique that provides storage and computing components as on-demand services over a network. In this paper, we present an energy-saving algorithm for cloud resource management that uses the Kalman filter to predict the workload and thereby achieve high resource availability with low service level agreement violation. Using the proposed algorithm, one can estimate the future workload trend, predict the computing components' workload utilization, and in turn reduce energy consumption and achieve load balancing in a cloud system. Experimental results show that the proposed algorithm achieves more than 92.22% accuracy in computing-component workload prediction, reduces energy consumption by 55.11%, and has a power prediction error rate of 3.71%. Copyright © 2013 John Wiley & Sons, Ltd.
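
The abstract does not specify the filter's state model; the following is a minimal one-dimensional Kalman filter sketch for one-step-ahead CPU-utilization prediction under an assumed random-walk model with assumed noise variances, intended only to illustrate the idea.

```python
# Minimal 1-D Kalman filter for CPU-utilization prediction (illustrative; the
# paper's state model and noise parameters are not given in the abstract).
def kalman_predict(measurements, q=1e-3, r=4e-2):
    """q: assumed process-noise variance, r: assumed measurement-noise variance."""
    x, p = measurements[0], 1.0          # initial state estimate and covariance
    predictions = []
    for z in measurements[1:]:
        # Predict step (random-walk model: next utilization ~ current one).
        x_prior, p_prior = x, p + q
        predictions.append(x_prior)      # one-step-ahead forecast
        # Update step with the new observation z.
        k = p_prior / (p_prior + r)      # Kalman gain
        x = x_prior + k * (z - x_prior)
        p = (1 - k) * p_prior
    return predictions, x                # forecasts and latest filtered estimate

cpu = [0.42, 0.45, 0.44, 0.50, 0.61, 0.58, 0.63]   # hypothetical utilization trace
forecasts, latest = kalman_predict(cpu)
print([round(f, 3) for f in forecasts], "next ~", round(latest, 3))
```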

9.
The massive growth of cloud computing has led to huge amounts of energy consumption and carbon emissions by a large number of servers. One of the major aspects of cloud computing is the scheduling of the many task requests submitted by users. Minimizing energy consumption while ensuring the user's QoS preferences is very important for maximizing the profit of cloud service providers and honoring the user's service level agreement (SLA). Therefore, in addition to executing users' tasks, cloud data centers should meet different criteria when applying cloud resources by considering the multiple requirements of different users. Mapping user requests to cloud resources for processing in a distributed environment is a well-known NP-hard problem. To address this problem, this paper proposes an energy-efficient task-scheduling algorithm based on the best-worst method (BWM) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). The main objective is to determine which cloud scheduling solution should be selected. First, a decision-making group identifies the evaluation criteria. After that, a BWM process is applied to assign an importance weight to each criterion, because the selected criteria have varied importance. Then, TOPSIS uses these weighted criteria as inputs to evaluate and measure the performance of each alternative. The performance of the proposed and existing algorithms is evaluated using several benchmarks in the CloudSim toolkit and statistical testing through ANOVA, where the evaluation metrics include makespan, energy consumption, and resource utilization.
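
The BWM weighting step is omitted here; the sketch below illustrates only the TOPSIS ranking stage with hypothetical alternatives, weights, and criterion directions (makespan and energy treated as cost criteria, utilization as a benefit criterion).

```python
# Illustrative TOPSIS ranking of scheduling alternatives (BWM weighting omitted;
# the weights, alternatives, and criterion directions here are assumptions).
import numpy as np

# Rows: candidate schedules; columns: makespan (s), energy (kWh), utilization (%).
decision = np.array([[120.0, 3.4, 72.0],
                     [150.0, 2.9, 81.0],
                     [110.0, 3.8, 65.0]])
weights = np.array([0.5, 0.3, 0.2])        # e.g. produced by BWM (assumed values)
benefit = np.array([False, False, True])   # makespan and energy are cost criteria

norm = decision / np.linalg.norm(decision, axis=0)      # vector normalization
weighted = norm * weights

ideal = np.where(benefit, weighted.max(axis=0), weighted.min(axis=0))
anti = np.where(benefit, weighted.min(axis=0), weighted.max(axis=0))

d_pos = np.linalg.norm(weighted - ideal, axis=1)
d_neg = np.linalg.norm(weighted - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)        # higher = closer to the ideal solution

print("ranking (best first):", np.argsort(-closeness))
```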

10.
This paper proposes an energy-aware cross-layer mobile cloud resource allocation approach. A hybrid cloud architecture is adopted for provisioning mobile services to mobile device users, comprising a nearby local cloud and a remote public cloud. Computation-intensive tasks can be processed by the remote public cloud, while delay-sensitive computation can be processed by the nearby local cloud. Based on the system context and mobile user preferences, the energy-aware cross-layer mobile cloud resource allocation approach optimizes cloud resource consumption and system performance. The cooperation and collaboration among the local cloud agent, the public cloud supplier, and mobile cloud users are regulated through an economic approach. The energy-aware cross-layer resource allocation is performed at both the local cloud level and the public cloud level, comprehensively considering the benefits of all participants. The proposed allocation algorithm is evaluated in an experimental environment, and comparison results and analysis are discussed.

11.
This paper proposes a novel framework for virtual content delivery networks (CDNs) based on cloud computing. The proposed framework aims to provide multimedia content delivery services customized for content providers by sharing virtual machines (VMs) in an Infrastructure-as-a-Service cloud while fulfilling the service level agreement. Furthermore, it supports elastic virtual CDN services, which enables the capabilities of VMs to be scaled to match the dynamically changing resource demand of the aggregated virtual CDN services. To this end, we provide the system architecture and relevant operations for the virtual CDNs and evaluate their performance through simulation.

12.
Cloud computing makes it possible for users to share computing power. The multi-data-center framework has gained great popularity in modern cloud computing. Due to the uncertainty of user requests, the CPU (Central Processing Unit) loads of different data centers differ. A high CPU utilization rate in a data center affects the service provided to users, while a low CPU utilization rate causes high energy consumption. Therefore, it is important to balance CPU resources across data centers in the modern cloud computing framework. A virtual machine (VM) migration algorithm is proposed to balance CPU resources across data centers. The simulation results suggest that the proposed algorithm performs well in balancing CPU resources across data centers and in reducing energy consumption.
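
The abstract does not describe the migration policy itself; as an illustration, the sketch below implements one simple greedy policy (repeatedly move the smallest VM from the most loaded data center to the least loaded one while this narrows the utilization gap), with assumed capacities, VM sizes, and threshold.

```python
# Minimal greedy sketch of cross-data-center CPU balancing via VM migration
# (illustrative only; the paper's actual policy and thresholds are not given).
def utilization(dc):
    return sum(dc["vms"].values()) / dc["cap"]

def balance(dcs, band=0.10, max_moves=100):
    """Migrate VMs from the most to the least loaded DC until the spread <= band."""
    moves = []
    for _ in range(max_moves):
        hot = max(dcs, key=utilization)
        cold = min(dcs, key=utilization)
        gap = utilization(hot) - utilization(cold)
        if gap <= band or not hot["vms"]:
            break
        # Candidate: the smallest VM on the hot DC.
        vm, cpu = min(hot["vms"].items(), key=lambda kv: kv[1])
        new_hot = (sum(hot["vms"].values()) - cpu) / hot["cap"]
        new_cold = (sum(cold["vms"].values()) + cpu) / cold["cap"]
        if abs(new_hot - new_cold) >= gap:
            break                          # the move would not narrow the gap
        hot["vms"].pop(vm)
        cold["vms"][vm] = cpu
        moves.append((vm, hot["name"], cold["name"]))
    return moves

dcs = [{"name": "dc1", "cap": 10000, "vms": {"a": 3000, "b": 2500, "c": 2000}},
       {"name": "dc2", "cap": 10000, "vms": {"d": 1000}},
       {"name": "dc3", "cap": 8000, "vms": {"e": 1200, "f": 800}}]
print(balance(dcs))
print({d["name"]: round(utilization(d), 2) for d in dcs})
```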

13.
The cloud computing environment is a real-time communication network that involves a large number of systems connected in a distributed fashion, for which resources are available on demand. In recent years, due to the enormous growth of data and information, data maintenance tasks involve a major effort in information technology (IT) industries. IT industries are therefore concentrating on the cloud computing environment in order to maintain their data and manage their resources. Owing to the increase in the number of data centres, which has an impact on electrical energy cost, peak power dissipation, cooling and carbon emission, power-conservation-based resource management is essential. This paper proposes a best-fit heuristic job placement algorithm to increase the job allocation percentage, a worst-fit heuristic virtual machine (VM) placement algorithm to place VMs on physical machines (PMs) and thereby reduce the number of PMs allotted, and a server consolidation algorithm to improve power conservation. Copyright © 2014 John Wiley & Sons, Ltd.
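
The exact heuristics are not reproduced in the abstract; the sketch below illustrates generic best-fit and worst-fit selection, here assumed to place jobs onto VMs and VMs onto PMs respectively, with hypothetical capacities.

```python
# Illustrative best-fit (jobs to VMs) and worst-fit (VMs to PMs) heuristics;
# the capacities and sizes below are hypothetical, not the paper's workloads.
def best_fit(item, bins):
    """Pick the feasible bin with the least leftover capacity (tightest fit)."""
    feasible = [b for b in bins if b["free"] >= item]
    return min(feasible, key=lambda b: b["free"] - item) if feasible else None

def worst_fit(item, bins):
    """Pick the feasible bin with the most leftover capacity (loosest fit)."""
    feasible = [b for b in bins if b["free"] >= item]
    return max(feasible, key=lambda b: b["free"] - item) if feasible else None

vms = [{"id": "vm1", "free": 4}, {"id": "vm2", "free": 8}, {"id": "vm3", "free": 6}]
pms = [{"id": "pm1", "free": 16}, {"id": "pm2", "free": 32}]

job_cpu = 5
target_vm = best_fit(job_cpu, vms)           # tightest VM that still fits the job
target_vm["free"] -= job_cpu

vm_cpu = 6
target_pm = worst_fit(vm_cpu, pms)           # PM with most headroom hosts the VM
target_pm["free"] -= vm_cpu

print(target_vm["id"], target_pm["id"])      # vm3, pm2
```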

14.
Since the rise of cloud computing, web service applications have expanded rapidly. However, cloud data centers also suffer from high power consumption, and their resources are often not used effectively. Decreasing power consumption and enhancing resource utilization have become key issues in cloud computing environments. In this paper, we propose a method called MBFDP (modified best fit decreasing packing) to decrease the power consumption and enhance the resource utilization of cloud computing servers. Experimental results show that the proposed solution reduces power consumption effectively and enhances server resource utilization.

15.
As cloud computing models have evolved from clusters to large-scale data centers, reducing energy consumption, which is a large part of the overall operating expense of data centers, has received much attention lately. From a cluster-level viewpoint, the most popular method for an energy-efficient cloud is Dynamic Right Sizing (DRS), which turns off idle servers that do not have any virtual resources running. To maximize energy efficiency with DRS, one of the primary adaptive resource management strategies is Virtual Machine (VM) migration, which consolidates VM instances onto as few servers as possible. In this paper, we propose a Two Phase based Adaptive Resource Management (TP-ARM) scheme that migrates VM instances from under-utilized servers that are to be turned off to sustainable ones, based on their resource utilizations monitored in real time. In addition, we designed a Self-Adjusting Workload Prediction (SAWP) method to improve the forecasting accuracy of resource utilization even under irregular demand patterns. Experimental results using real cloud servers show that our proposed schemes provide superior energy consumption, resource utilization and job completion time over existing resource allocation schemes.
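
The SAWP mechanism is not described in the abstract; the sketch below shows one generic self-adjusting idea, a moving-average predictor whose window shrinks when recent prediction error grows, purely as an illustration and not the authors' method.

```python
# Illustrative self-adjusting moving-average predictor (not the paper's SAWP):
# the averaging window shrinks when recent prediction error grows, so the
# forecast reacts faster to irregular demand patterns.
def adaptive_forecast(history, w_min=2, w_max=8, err_threshold=0.05):
    window = w_max
    preds = []
    for t in range(1, len(history)):
        start = max(0, t - window)
        pred = sum(history[start:t]) / (t - start)
        preds.append(pred)
        error = abs(pred - history[t])
        # Shrink the window on large errors, grow it back when the trace is calm.
        window = max(w_min, window - 1) if error > err_threshold else min(w_max, window + 1)
    return preds

cpu = [0.30, 0.31, 0.33, 0.32, 0.55, 0.60, 0.62, 0.61]   # hypothetical bursty trace
print([round(p, 3) for p in adaptive_forecast(cpu)])
```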

16.
Roopa V., Malarvizhi K., Karthik S. Wireless Personal Communications, 2021, 117(4): 3327-3342

In the present decade, cloud computing provides utility-based IT services to global consumers. On a pay-per-use basis, it facilitates the hosting of persistent services from the user, business and technical fields. However, the data centers hosting these cloud-based services consume large amounts of energy, power and resources. Hence, there is a need for an efficient cloud resource management model that reduces resource consumption and computational cost. Moreover, managing virtual resources under the varying demands of a cloud environment requires dynamic virtual resource management. With that concern, this paper presents a model called Energy and Power Aware Dynamic Migration (EPADM). Based on the model design, the main objectives, such as efficient resource mapping and provisioning algorithms, are presented. The dynamic virtual machine (VM) migration operation comprises VM relocation and consolidation parts for achieving the desired results. Moreover, the paper also concentrates on reducing SLA (Service Level Agreement) violations, a significant factor to be considered in the cloud. The proposed EPADM model is evaluated using the CloudSim toolkit. The results illustrate that the proposed model has greater potential than others, as it provides energy and power efficiency and reduced SLA violations under distinct workload cases.

17.
Service-oriented architecture (SOA) plays a crucial role in supporting productive cloud services. Cloud computing has also recently carried the theoretical notion of diverse businesses (such as e-commerce) into actual use. Service functionality can be degraded by network traffic congestion because of the broadly dispersed nature of e-commerce in clouds, which is a key challenge for time-critical jobs. Over the last decade, a vast range of applications and large-scale operators have been increasingly attracted to migrating their services into clouds. An effective method for keeping applications accessible throughout standard business hours is to continually move virtual machine containers from one data center to another. With the prevalence of cloud computing, many applications have now been moved fully or partly to the cloud. This can be handled through the migration of cloud services across diverse platforms in a way that minimizes the communication cost of e-commerce. Since this problem is NP-hard, in the present article we propose an automatic smart service migration scheme based on the ant colony optimization (ACO) algorithm for cloud-oriented e-commerce. In the presented model, the ACO algorithm is used to make the best (near-optimal) service migration decisions. Based on the obtained results, the proposed technique yields an optimal number of migrations compared with the existing models.
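
The paper's ACO design is not given in the abstract; the following is a minimal, generic ACO sketch for assigning each service to a target platform so as to reduce a communication-cost matrix, where the pheromone model, parameters, and cost values are all assumptions.

```python
# Minimal ACO sketch for choosing a target platform per service (illustrative only;
# the paper's pheromone model, heuristic, and cost data are not in the abstract).
import random

COST = [[4.0, 2.0, 7.0],   # hypothetical communication cost of service i on platform j
        [3.0, 6.0, 1.0],
        [5.0, 2.5, 2.0]]
N_SERVICES, N_PLATFORMS = len(COST), len(COST[0])

def solution_cost(assign):
    return sum(COST[s][p] for s, p in enumerate(assign))

def aco(ants=10, iters=50, alpha=1.0, beta=2.0, rho=0.3, q=10.0):
    tau = [[1.0] * N_PLATFORMS for _ in range(N_SERVICES)]   # pheromone trails
    best, best_cost = None, float("inf")
    for _ in range(iters):
        solutions = []
        for _ in range(ants):
            assign = []
            for s in range(N_SERVICES):
                # Probability ~ pheromone^alpha * (1/cost)^beta.
                weights = [(tau[s][p] ** alpha) * ((1.0 / COST[s][p]) ** beta)
                           for p in range(N_PLATFORMS)]
                assign.append(random.choices(range(N_PLATFORMS), weights=weights, k=1)[0])
            solutions.append(assign)
        # Evaporation, then deposit pheromone proportional to solution quality.
        for s in range(N_SERVICES):
            for p in range(N_PLATFORMS):
                tau[s][p] *= (1.0 - rho)
        for assign in solutions:
            c = solution_cost(assign)
            if c < best_cost:
                best, best_cost = assign[:], c
            for s, p in enumerate(assign):
                tau[s][p] += q / c
    return best, best_cost

print(aco())   # converges toward the cheapest platform for each service
```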

18.
The fiber-wireless (FiWi) access network is a very promising solution for next-generation access networks. Because of the different protocols used by its subnets, it is hard to globally optimize the operation of FiWi networks. Network virtualization technology is applied to FiWi networks to realize the coexistence of heterogeneous networks and centralized control of network resources. Existing virtual resource management methods are usually designed to optimize virtual network (VN) request acceptance rate and survivability, but seldom consider energy consumption and varied quality of service (QoS) requirements, which are hot and important topics in the industrial field. Therefore, this paper focuses on a QoS-aware cross-domain collaborative energy-saving mechanism for FiWi virtual networks. First, the virtual network embedding (VNE) model, energy consumption model, and VNE profit model of FiWi networks are established. Then, a QoS-aware in-region VN embedding mechanism is proposed to guarantee the service quality of different services. After that, an underlying resource updating mechanism based on energy-efficiency awareness is designed so that low-load ONUs and wireless routers can co-sleep in FiWi networks. Finally, a QoS-aware re-embedding mechanism is presented to allocate proper resources to the VNs affected by the sleeping mechanism. Especially for video VNs, a re-embedding scheme that adopts traffic splitting and multipath routing is introduced to meet resource limitations and low-latency requirements. Simulation results show that the proposed mechanism can reduce the FiWi network's energy consumption, improve VNE profit, and ensure a high embedding acceptance rate and the strict delay demands of high-priority VNs.

19.
Cloud computing introduced a new paradigm in the IT industry by providing on-demand, elastic, ubiquitous computing resources to users. In a virtualized cloud data center, there are a large number of physical machines (PMs) hosting different types of virtual machines (VMs). Unfortunately, cloud data centers do not fully utilize their computing resources and waste a considerable amount of energy, which incurs high operational cost and has a dramatic impact on the environment. Server consolidation is one of the techniques that provide efficient use of physical resources by reducing the number of active servers. Since VM placement plays an important role in server consolidation, one of the main challenges in cloud data centers is an efficient mapping of VMs to PMs. Multiobjective VM placement is generating considerable interest among researchers and academics. This paper presents a detailed review of the recent state-of-the-art multiobjective VM placement mechanisms that use nature-inspired metaheuristic algorithms in cloud environments. It also gives special attention to the parameters and approaches used for placing VMs onto PMs. Finally, we discuss and explore further work that can be done in this area of research.

20.
With the expansion of the size of data centers, software-defined networking (SDN) is becoming a trend for simplifying data center network management with central and flexible flow control. To achieve L2 abstractions in a multitenant cloud, Open vSwitch (OVS) is commonly used to build overlay tunnels (e.g., Virtual eXtensible Local Area Network [VXLAN]) on top of existing underlying networks. However, the poor VXLAN performance of OVS is of huge concern. Instead of solving the performance issues of OVS, in this paper we propose a circuit-based logical layer 2 bridging mechanism (CBL2), which builds label-switched circuits and performs data-plane multicasting in a software-defined leaf-spine fabric to achieve scalable L2 without overlay tunneling. Our evaluations indicate that direct transmission in OVS improves throughput performance by 58% compared with VXLAN tunneling, and that data-plane multicasting for ARP reduces address resolution latency from 149 to 0.5 ms compared with control-plane broadcast forwarding. The evaluation results also show that CBL2 provides protection switching times of 0.6, 0.4, and 11 ms in the presence of switch failure, link failure, and port shutdown, respectively, in practical deployment.
