Similar Literature
20 similar documents retrieved.
1.
Providing a pool of various resources and services to customers over the Internet in exchange for money has made cloud computing one of the most popular technologies. Managing the provided resources and services at the lowest cost and maximum profit is a crucial issue for cloud providers. Cloud providers therefore auto-scale computing resources according to users' requests in order to minimize operational costs. The time and cost required to scale computing resources up and down are among the major limits of scaling, which makes this an important challenge in cloud computing. In this paper, a new approach based on the MAPE-K loop is proposed to auto-scale the resources for multilayered cloud applications. The K-nearest neighbor (K-NN) algorithm is used to analyze and label virtual machines, and statistical methods are used to make scaling decisions. In addition, a resource allocation algorithm is proposed to allocate requests to the resources. Simulation results show that the proposed approach reduces operational costs and improves resource utilization, response time, and profit.
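To make the K-NN-plus-statistics idea concrete, here is a minimal sketch (not the authors' implementation) of the Analyze/Plan steps of such a MAPE-K auto-scaler: K-NN labels each VM from monitored metrics, and a simple statistical rule decides whether to scale out or in. The feature set, labels, and thresholds are illustrative assumptions.

```python
# Hypothetical Analyze/Plan steps of a MAPE-K auto-scaler: K-NN labels VMs
# from monitored metrics, then a majority-based rule picks a scaling action.
from sklearn.neighbors import KNeighborsClassifier
import numpy as np

# Training data: [cpu_util, mem_util, request_rate] -> load label (assumed values)
history = np.array([[0.2, 0.3, 50], [0.9, 0.8, 400], [0.5, 0.5, 150]])
labels = np.array(["underloaded", "overloaded", "normal"])

knn = KNeighborsClassifier(n_neighbors=1).fit(history, labels)

def plan_scaling(current_vm_metrics):
    """Label each VM with K-NN, then apply a simple statistical (majority) rule."""
    vm_labels = knn.predict(np.array(current_vm_metrics))
    overloaded = np.mean(vm_labels == "overloaded")
    underloaded = np.mean(vm_labels == "underloaded")
    if overloaded > 0.5:      # most VMs saturated -> add capacity
        return "scale_out"
    if underloaded > 0.5:     # most VMs idle -> release capacity
        return "scale_in"
    return "no_action"

print(plan_scaling([[0.85, 0.9, 380], [0.95, 0.7, 420], [0.4, 0.5, 120]]))
```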

2.
Cloud computing uses scheduling and load balancing for virtualized file sharing in cloud infrastructure. Both must be performed in an optimized manner in the cloud computing environment to achieve optimal file sharing. Recently, scalable traffic management has been developed in cloud data centers for traffic load balancing and quality-of-service provisioning. However, reducing latency during multidimensional resource allocation remains a challenge, so efficient resource scheduling is needed to ensure load optimization in the cloud. The objective of this work is to introduce an integrated resource scheduling and load balancing algorithm for efficient cloud service provisioning. The method constructs a Fuzzy-based Multidimensional Resource Scheduling model to obtain resource scheduling efficiency in the cloud infrastructure. Utilization of virtual machines is then increased through effective and fair load balancing by dynamically selecting a request from a class using the Multidimensional Queuing Load Optimization algorithm. A load balancing algorithm is then implemented to avoid underutilization and overutilization of resources, improving latency for each class of request. Simulations were conducted with the CloudSim simulator in cloud data centers, and the results show that the proposed method achieves better performance in terms of average success rate, resource scheduling efficiency, and response time. The simulation analysis shows that the method improves resource scheduling efficiency by 7% and reduces response time by 35.5% compared with state-of-the-art works.
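As a rough illustration of fuzzy multidimensional resource scoring in this spirit (a sketch only: the triangular membership functions, dimension weights, and VM data below are assumptions, not the paper's model), the following ranks VMs by an aggregated fuzzy "availability" across CPU, memory, and bandwidth and schedules on the best one:

```python
# Fuzzy multidimensional scoring sketch: aggregate per-dimension membership
# degrees into one score per VM and pick the highest-scoring VM.

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_fitness(cpu_free, mem_free, bw_free):
    """Aggregate fuzzy 'availability' of one VM across three dimensions."""
    cpu = tri(cpu_free, 0.0, 1.0, 1.5)   # degree of 'plenty of free CPU'
    mem = tri(mem_free, 0.0, 1.0, 1.5)
    bw = tri(bw_free, 0.0, 1.0, 1.5)
    return 0.4 * cpu + 0.4 * mem + 0.2 * bw   # assumed weights

def pick_vm(vms):
    """Schedule on the VM with the highest fuzzy score."""
    return max(vms, key=lambda v: fuzzy_fitness(*v["free"]))

vms = [{"id": "vm1", "free": (0.2, 0.5, 0.9)},
       {"id": "vm2", "free": (0.8, 0.7, 0.6)}]
print(pick_vm(vms)["id"])   # vm2: more residual capacity across dimensions
```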

3.
The Journal of Supercomputing - Usually, a large number of concurrent bag-of-tasks (BoTs) application execution requests are submitted to cloud data centers (CDCs), which need to be optimally...

4.
5.
Software development nowadays involves several levels of abstraction: from the programming of single objects, to their combination into components, to their publication as services, and to the overall architecture linking elements at each level. As a result, software engineering deals with a wider range of artifacts and concepts (in the context of this paper, services and business processes) than ever before. In this paper we explore the importance of having an adequate engine for executing business processes written as compositions of Web services. The paper shows that, independently of the composition language used, the overall scalability of the system is determined by how the run-time engine treats process execution. This is particularly relevant at the service level because publishing a process through a Web service interface makes it accessible to an unpredictable and potentially very large number of clients. As a consequence, the process developer is confronted with the difficult question of resource provisioning. Determining the optimal configuration of the distributed engine that runs the process becomes sensitive both to the actual number of clients and to the kinds of processes to be executed. The main contribution of the paper is to show how resource provisioning for software business processes can be solved using autonomic computing techniques. The engine separates execution into two stages (navigation and dispatching) and uses a controller to allocate the nodes of a cluster of computers to each of those stages as the workload changes. The controller can be configured with different policies that define how to reconfigure the system. To prove the feasibility of the concept, we have implemented the autonomic controller and evaluated its performance with an extensive set of experiments.
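A minimal sketch of such a two-stage autonomic controller follows. The reallocation policy, thresholds, and queue-based load measure are assumptions for illustration, not the paper's controller: cluster nodes are shifted between the navigation and dispatching stages as their backlogs change.

```python
# Illustrative two-stage autonomic controller: reassign cluster nodes between
# the navigation and dispatching stages based on their relative backlogs.

class StageController:
    def __init__(self, total_nodes):
        self.alloc = {"navigation": total_nodes // 2,
                      "dispatching": total_nodes - total_nodes // 2}

    def reconfigure(self, nav_queue_len, disp_queue_len):
        """Shift one node toward the stage with the larger per-node backlog."""
        nav_load = nav_queue_len / max(self.alloc["navigation"], 1)
        disp_load = disp_queue_len / max(self.alloc["dispatching"], 1)
        if nav_load > 1.5 * disp_load and self.alloc["dispatching"] > 1:
            self.alloc["dispatching"] -= 1
            self.alloc["navigation"] += 1
        elif disp_load > 1.5 * nav_load and self.alloc["navigation"] > 1:
            self.alloc["navigation"] -= 1
            self.alloc["dispatching"] += 1
        return dict(self.alloc)

ctrl = StageController(total_nodes=8)
print(ctrl.reconfigure(nav_queue_len=120, disp_queue_len=20))
```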

6.
Storing users' massive data in the data center with minimal time consumption is a key problem for improving the efficiency of cloud storage and resolving its development bottleneck. This paper first proves that the minimum-time storage problem of resource scheduling schemes in a cloud storage environment is NP-complete. Then, to address the shortcomings of existing algorithms, which consider the factors affecting storage scheduling incompletely and whose scheduling results easily fall into local optima, a new resource scheduling algorithm is proposed. The algorithm first uses the triangular fuzzy number analytic hierarchy process to comprehensively analyze the factors affecting scheduling and obtain a judgment matrix for the storage nodes, which is used to construct the objective function of the subsequent genetic algorithm. The simple genetic algorithm is then redesigned in terms of solution encoding, crossover and mutation operations, and self-improvement of lethal chromosomes, making it suitable for large-scale resource scheduling in a cloud storage environment. Finally, the proposed algorithm is compared with the Cinder block storage algorithm in OpenStack and with existing improved algorithms; the experimental results verify its effectiveness and show that it achieves more efficient resource scheduling.
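As a rough illustration of the fuzzy-AHP step (the pairwise comparison values and the centroid defuzzification below are assumptions, not the paper's data or exact procedure), a judgment matrix of triangular fuzzy numbers is collapsed into storage-node weights that could then feed the genetic algorithm's objective function:

```python
# Fuzzy-AHP sketch: defuzzify a pairwise judgment matrix of triangular fuzzy
# numbers (l, m, u) into normalized storage-node weights.
import numpy as np

# Pairwise comparison of 3 storage nodes; entries are (l, m, u) triples.
fuzzy_matrix = [
    [(1, 1, 1), (2, 3, 4), (1, 2, 3)],
    [(1/4, 1/3, 1/2), (1, 1, 1), (1, 2, 3)],
    [(1/3, 1/2, 1), (1/3, 1/2, 1), (1, 1, 1)],
]

def node_weights(fm):
    """Centroid-defuzzify each row's fuzzy sum and normalize into weights."""
    crisp_rows = []
    for row in fm:
        sums = np.sum(np.array(row), axis=0)   # (sum_l, sum_m, sum_u)
        crisp_rows.append(sums.mean())          # centroid of the triangle
    crisp_rows = np.array(crisp_rows)
    return crisp_rows / crisp_rows.sum()

weights = node_weights(fuzzy_matrix)
print(weights)   # e.g., coefficients in the GA's storage-time objective
```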

7.
Resource provisioning strategies are crucial for workflow scheduling problems, which are widespread in cloud computing. The main challenge lies in determining the amounts of reserved and on-demand resources needed to meet users' requirements. In this paper, we consider the cloud workflow scheduling problem with hybrid resource provisioning to minimize the total renting cost, which is NP-hard and has not been studied before. An iterative population-based meta-heuristic is developed. According to the shift vectors obtained during the search procedure, timetables are computed quickly. The appropriate amounts of reserved and on-demand resources are determined by an incremental optimization method. The utilization of each resource is balanced in a swaying manner, and the probabilistic matrix is updated accordingly for the next iteration. The proposed algorithm is compared with modified versions of existing algorithms for similar problems. Experimental results demonstrate the effectiveness and efficiency of the proposed algorithm.
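To illustrate the reserved/on-demand trade-off behind a renting-cost objective of this kind (prices and the demand profile below are assumptions, and the search is a naive enumeration rather than the paper's incremental method):

```python
# Hybrid provisioning cost sketch: pay for 'reserved' instances in every
# interval and serve any excess demand at the on-demand price.

def renting_cost(demand, reserved, reserved_price=0.06, ondemand_price=0.10):
    """Total renting cost for a fixed reserved level over a demand profile."""
    cost = reserved * reserved_price * len(demand)
    cost += sum(max(d - reserved, 0) * ondemand_price for d in demand)
    return cost

demand = [3, 5, 8, 12, 7, 4, 2, 9]          # VMs needed per interval (assumed)
best = min(range(0, max(demand) + 1), key=lambda r: renting_cost(demand, r))
print(best, renting_cost(demand, best))      # best reserved level and its cost
```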

8.
As Grid computing has emerged as a technology for providing computational resources to industry and scientific projects, new requirements have arisen, and resource management has become an important research area in the Grid computing environment. Provisioning the appropriate resource for a given application is a tedious task, so it is important to check and verify resource provisioning before the application's execution. In this paper, a resource provisioning framework is presented that offers a resource provisioning policy covering provisioned resource allocation and resource scheduling. The framework has been formally specified and verified. Formal specification and verification of the framework help in predicting possible errors before the scheduling process itself, resulting in efficient resource provisioning and scheduling of Grid resources.

9.

Purpose

The objective of this study is to optimize task scheduling and resource allocation using an improved differential evolution algorithm (IDEA) based on the proposed cost and time models in a cloud computing environment.

Methods

The proposed IDEA combines the Taguchi method and a differential evolution algorithm (DEA). The DEA has a powerful global exploration capability over the macro-space and uses few control parameters. The systematic reasoning ability of the Taguchi method is used to exploit better individuals in the micro-space as potential offspring. The proposed IDEA is therefore well balanced between exploration and exploitation. The proposed cost model includes the processing and receiving costs, and the time model incorporates receiving, processing, and waiting times. A multi-objective optimization approach based on the non-dominated sorting technique, rather than a normalized single-objective method, is applied to find the Pareto front of total cost and makespan.

Results

In the five-task, five-resource problem, the mean coverage ratios C(IDEA, DEA) of 0.368 and C(IDEA, NSGA-II) of 0.3 are superior to the ratios C(DEA, IDEA) of 0.249 and C(NSGA-II, IDEA) of 0.288, respectively. In the ten-task, ten-resource problem, the mean coverage ratios C(IDEA, DEA) of 0.506 and C(IDEA, NSGA-II) of 0.701 are superior to the ratios C(DEA, IDEA) of 0.286 and C(NSGA-II, IDEA) of 0.052, respectively. The Wilcoxon matched-pairs signed-rank test confirms that there is a significant difference between the IDEA and the other methods. In summary, these experimental results confirm that the IDEA outperforms both the DEA and NSGA-II in finding better Pareto-optimal solutions.

Conclusions

In this study, the IDEA shows its effectiveness in optimizing task scheduling and resource allocation compared with both the DEA and the NSGA-II. Moreover, when conflicting objectives are present, decision makers can select among the Gantt charts of task schedules with smaller makespan, smaller cost, or both.
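For reference, a coverage ratio C(A, B) of the kind reported above can be computed as the fraction of solutions in front B that are dominated by at least one solution in front A. A minimal sketch for two minimization objectives (total cost, makespan), with hypothetical fronts rather than the paper's data:

```python
# Coverage metric C(A, B): fraction of B's solutions dominated by some
# solution in A, for minimization objectives.

def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def coverage(front_a, front_b):
    covered = sum(1 for b in front_b if any(dominates(a, b) for a in front_a))
    return covered / len(front_b)

# Hypothetical (total cost, makespan) fronts, purely for illustration.
idea_front = [(10, 30), (12, 25), (15, 20)]
dea_front = [(11, 31), (13, 26), (16, 22)]
print(coverage(idea_front, dea_front))   # C(IDEA, DEA)
print(coverage(dea_front, idea_front))   # C(DEA, IDEA)
```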

10.
The ant colony algorithm plays an important role in combinatorial optimization, but the traditional ant colony scheduling algorithm searches slowly and easily falls into local optima. To address this, a hybrid algorithm (ACOCS) combining the ant colony algorithm with the cuckoo search algorithm is proposed for resource scheduling in a cloud environment. The method retains the high solution accuracy and robustness of the ant colony algorithm while incorporating the fast global search capability of cuckoo search. Simulation results show that the proposed ACOCS scheduling algorithm effectively reduces the response time required for scheduling and also improves system resource utilization to a certain extent.
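A compact, illustrative skeleton of such an ACO/cuckoo hybrid for mapping tasks to VMs follows. The pheromone handling and the cuckoo-style perturbation are simplified assumptions and do not reproduce the ACOCS design; task lengths and VM speeds are made up.

```python
# Simplified ACO + cuckoo-style hybrid: ants build task-to-VM assignments from
# pheromone, a cuckoo perturbation of the best solution adds global exploration.
import random

tasks = [4, 7, 3, 9, 5]          # task lengths (arbitrary units)
vm_speed = [2.0, 3.0, 1.5]       # VM processing speeds

def makespan(assign):
    load = [0.0] * len(vm_speed)
    for t, v in zip(tasks, assign):
        load[v] += t / vm_speed[v]
    return max(load)

def ant_solution(pheromone):
    """Each ant picks a VM per task with probability proportional to pheromone."""
    return [random.choices(range(len(vm_speed)), weights=pheromone[i])[0]
            for i in range(len(tasks))]

def cuckoo_perturb(assign):
    """Cuckoo-style jump: reassign a random task to a random VM."""
    new = assign[:]
    new[random.randrange(len(tasks))] = random.randrange(len(vm_speed))
    return new

pheromone = [[1.0] * len(vm_speed) for _ in tasks]
best = ant_solution(pheromone)
for _ in range(100):
    ants = [ant_solution(pheromone) for _ in range(10)]
    ants.append(cuckoo_perturb(best))               # inject global exploration
    iter_best = min(ants, key=makespan)
    if makespan(iter_best) < makespan(best):
        best = iter_best
    for i, v in enumerate(best):                    # evaporate and reinforce
        pheromone[i] = [0.9 * p for p in pheromone[i]]
        pheromone[i][v] += 1.0 / makespan(best)

print(best, makespan(best))
```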

11.
The Journal of Supercomputing - Fog-integrated cloud (FiC) contains a fair amount of heterogeneity, leading to uncertainty in resource provisioning. An admission control manager (ACM) is...

12.
Workflow scheduling in Mobile Edge Computing (MEC) tries to allocate the best possible set of resources for the workflows, considering objectives such as deadline, cost, energy, and Quality of Service (QoS). However, MEC may face varying workloads from the IoT and may not have the resources required to handle the workflows efficiently. To mitigate this problem, in this paper we use proactive resource provisioning and workload prediction. We present a workload prediction method based on a multilayer feed-forward Artificial Neural Network (ANN) and apply its results to resource provisioning. We then present an opposition-based version of the Marine Predator Algorithm (MPA), denoted OMPA, in which a probabilistic opposition-based learning (OBL) method combines the OBL, quasi-OBL, and dynamic OBL methods. The OMPA algorithm is used both for training the multilayer feed-forward ANN model and for multi-workflow scheduling, taking into account factors such as the makespan and the number of Virtual Machines (VMs). Extensive experiments conducted in the iFogSim simulator on the NASA and Saskatchewan datasets indicate that the proposed scheme achieves better results than other metaheuristic algorithms and scheduling schemes.
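A minimal sketch of the basic opposition-based learning step used in such algorithms (the quasi and dynamic variants and their probabilistic mixing are omitted, and the objective below is a stand-in, not the scheduling fitness from the paper):

```python
# Basic OBL step: for a candidate x in [lb, ub], its opposite is lb + ub - x;
# keep whichever of the pair has the better (lower) fitness.
import random

def fitness(x):
    # Stand-in objective (e.g., predicted makespan of a schedule encoded by x).
    return sum(xi ** 2 for xi in x)

def opposite(x, lb, ub):
    return [l + u - xi for xi, l, u in zip(x, lb, ub)]

lb, ub = [0.0] * 4, [10.0] * 4
population = [[random.uniform(l, u) for l, u in zip(lb, ub)] for _ in range(6)]

# Apply OBL to the whole population.
population = [min((p, opposite(p, lb, ub)), key=fitness) for p in population]
print(min(population, key=fitness))
```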

13.
Resource scheduling in large-scale distributed systems, such as grids and clouds, is difficult due to the size, dynamism, and volatility of resources. These resources are eclectic and autonomous, and may exhibit different usage policies, levels of participation, capabilities, local load, and reliability. Moreover, applications are likely to exhibit various load patterns and levels, and distributed resources may organize into various overlay topologies for information and query dissemination. Researchers have proposed a wide variety of approaches and policies for mapping offered load onto resources and for solving the various component parts of the scheduling problem. However, production clouds and grids may be underutilized and may not exhibit the load needed to effectively characterize all of the scheduling system inputs. The composition of large-scale systems is also changing, potentially to include more individual and peer-to-peer resources. These factors will influence the effectiveness of proposed scheduling solutions. Therefore, a simulation environment is necessary to study different approaches under different scenarios, especially those that are expected but are not currently characteristic of existing systems. This article describes a general-purpose peer-to-peer simulation environment that allows a wide variety of parameters, protocols, strategies, and policies to be varied and studied. As a proof of concept, the simulation environment is applied to a large-scale distributed system problem that includes a core model and related mechanisms. In particular, this article presents a definition of, and possible peer-to-peer solutions for, the large-scale scheduling problem, and describes a general simulation model, some policies that can be varied, an implementation, and some sample results.

14.
This paper studies the resource scheduling problem in a cloud computing environment. Considering the characteristics of resource scheduling in the cloud, together with node failures and the mesh-like structure of inter-task dependencies, a mathematical model of the resource scheduling problem is established and solved with a discrete particle swarm optimization algorithm. Based on the characteristics of the model and the realities of cloud computing service operation, a test case is designed and simulated. The results verify the soundness of the model and the feasibility and effectiveness of the algorithm.

15.
左利云  左利锋 《计算机应用》2012,32(7):1916-1919
To cope with the complexity of the cloud computing environment and the uncertainty of cloud resources, a multi-objective integrated ant colony optimization scheduling algorithm is proposed. Entropy is used to measure the uncertainty of cloud resources during the global pheromone update, improving the algorithm's convergence speed. The expected minimum task completion time obtained by the Min-min algorithm is used as heuristic information to achieve minimum scheduling time. A load factor is added to the local pheromone update so that the pheromone is adjusted according to the current load, meeting load-balancing requirements; pheromone diffusion is also considered in the update, so that not only the current node but also neighboring nodes are taken into account, strengthening cooperation among ants and improving the quality of the optimal solution. The improved algorithm has lower complexity than the original ant colony algorithm and yields more precise optimal solutions. Experiments on a cloud simulation system show that the improved algorithm outperforms the other algorithms in scheduling time, load balancing, and other respects.
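For reference, the Min-min heuristic mentioned above repeatedly assigns the task whose best-case completion time is smallest; its result can serve as the expected-completion-time heuristic information. A small sketch with illustrative ETC (estimated time to compute) values, not the paper's data:

```python
# Min-min heuristic: etc[i][j] is the estimated time of task i on node j.

def min_min(etc):
    """Return a task-to-node assignment and the resulting makespan."""
    n_tasks, n_nodes = len(etc), len(etc[0])
    ready = [0.0] * n_nodes                  # current finish time of each node
    unassigned = set(range(n_tasks))
    assign = {}
    while unassigned:
        # Best completion time (and node) for each unassigned task.
        best = {t: min((ready[j] + etc[t][j], j) for j in range(n_nodes))
                for t in unassigned}
        # Pick the task whose best completion time is the overall minimum.
        task, (ct, node) = min(best.items(), key=lambda kv: kv[1][0])
        assign[task] = node
        ready[node] = ct
        unassigned.remove(task)
    return assign, max(ready)

etc = [[4, 6, 9], [8, 5, 7], [3, 7, 6], [5, 4, 8]]
print(min_min(etc))
```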

16.
Hot strip rolling is subject to special production process constraints, and compiling its production schedule is key to a steel enterprise's operations. A hot-rolling scheduling model based on the multiple traveling salesman problem (MTSP) with a parallel strategy is therefore proposed. The model considers not only the constraints on jumps in slab width, thickness, and hardness, but also the constraint on the number of slabs rolled within a single rolling unit. A new meta-heuristic algorithm is designed to solve the model. Simulation experiments on production data from a hot strip rolling mill show that the model and algorithm can efficiently produce satisfactory scheduling results.

17.
Mobile cloud computing is a dynamic, virtually scalable, network-based computing environment in which the mobile device acts as a thin client and applications run on remote cloud servers. The mobile cloud computing resources required by different users depend on their respective personalized applications. Efficient resource provisioning in mobile clouds is therefore an important aspect that needs special attention in order to make mobile cloud computing a highly optimized entity. This paper proposes an adaptive model for efficient resource provisioning in mobile clouds that predicts and stores resource usage in a two-dimensional matrix termed the resource provisioning matrix. These resource provisioning matrices are further used by an independent authority to predict future resource requirements using an artificial neural network. The independent authority also checks and verifies the resource usage bill computed by the cloud service provider using the resource provisioning matrices, providing cost-computation reliability for customers in the mobile cloud environment. The proposed model is implemented on Hadoop using three different applications. Results indicate that the proposed model achieves better mobile cloud resource utilization while maintaining quality of service for mobile customers, increases the battery life of the mobile device, and decreases the data usage cost for the mobile customer.
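A hypothetical sketch of a resource provisioning matrix and a neural prediction step follows. The matrix values, the one-step prediction window, and the use of scikit-learn's MLPRegressor are assumptions for illustration, not the paper's implementation.

```python
# Resource provisioning matrix: rows are time intervals, columns are resource
# types (CPU %, memory %, bandwidth). A small MLP predicts the next interval
# from the current one; the prediction can also be used to cross-check bills.
import numpy as np
from sklearn.neural_network import MLPRegressor

rpm = np.array([
    [30, 40, 10], [35, 42, 12], [50, 55, 20], [65, 60, 25],
    [70, 66, 30], [60, 58, 22], [55, 50, 18], [58, 52, 21],
], dtype=float)

X, y = rpm[:-1], rpm[1:]                 # predict next interval from current
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X, y)

next_usage = model.predict(rpm[-1].reshape(1, -1))
print(next_usage)   # basis for proactive provisioning
```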

18.
To address the inability of the traditional genetic algorithm to handle large-scale resource scheduling in multi-user cloud computing environments, an improved genetic algorithm combined with a two-stage encoding is proposed for large-scale resource scheduling. First, in the selection and reproduction phase, a dual fitness function based on the minimum task completion time and the degree of matching is adopted, so that the population is screened against both criteria. Then, the crossover and mutation probabilities are adaptively optimized, further improving the algorithm's adaptivity and ensuring rapid convergence toward the optimal solution, while an added convergence termination condition lets the algorithm exit the loop as early as possible. Finally, the improved genetic algorithm (IGA) is analyzed on the CloudSim platform; the experimental results show that it is well suited to large-scale resource scheduling and outperforms several recent comparison algorithms.
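One common way to make crossover and mutation probabilities adaptive, in the spirit described above (the exact formulas used in the paper may differ; the probability bounds below are assumptions): fitter-than-average individuals get lower probabilities so that good genes are preserved, while below-average individuals explore more aggressively.

```python
# Adaptive crossover/mutation probabilities based on relative fitness
# (maximization convention: higher fitness is better).

def adaptive_probs(f, f_avg, f_max, pc_hi=0.9, pc_lo=0.5, pm_hi=0.1, pm_lo=0.01):
    """f: fitness of the better parent (for pc) or of the individual (for pm)."""
    if f_max == f_avg:                       # degenerate population
        return pc_lo, pm_lo
    if f >= f_avg:
        scale = (f_max - f) / (f_max - f_avg)
        return pc_lo + (pc_hi - pc_lo) * scale, pm_lo + (pm_hi - pm_lo) * scale
    return pc_hi, pm_hi                      # below average: explore harder

print(adaptive_probs(f=0.8, f_avg=0.6, f_max=0.9))
print(adaptive_probs(f=0.4, f_avg=0.6, f_max=0.9))
```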

19.
The complexity of computing systems introduces several issues and challenges, such as poor performance and high energy consumption. In this paper, we first define and model a resource contention metric for high-performance computing workloads as a performance metric for scheduling algorithms and systems at the highest level of the resource management stack, addressing the main issues in computing systems. Second, we propose a novel autonomic resource contention-aware scheduling approach architected across the various layers of the resource management stack. We establish the relationship between distributed resource management layers in order to optimize the resource contention metric. The simulation results confirm the novelty of our approach. Copyright © 2013 John Wiley & Sons, Ltd.

20.
This paper considers a truck scheduling problem across multiple cross docks with temporary storage in front of the shipping docks. Receiving and shipping trucks can intermittently move in and out of the docks during the time intervals between their task executions, and trucks can enter any of the cross docks. A mixed-integer programming (MIP) model for scheduling across multiple cross docks is therefore developed, inspired by models in the literature. Its objective is to minimize the total operation time, that is, to maximize the throughput of the cross-docking system. An additional concept considered in the new method is multiple cross docks with limited capacity. Two types of delay time are considered: the first occurs when there is a shipping-truck change, and the second occurs when the current shipping truck does not load any product from a certain receiving truck or from temporary storage and waits until the products it needs arrive at the shipping docks. To solve the model, two meta-heuristics, simulated annealing (SA) and the firefly algorithm (FA), are proposed, together with a truck-scheduling procedure based on a discrete adaptation of the firefly algorithm. A designed experiment is carried out to tune the parameters of the algorithms, and the solutions obtained by the proposed SA and FA are compared.
