Similar Literature
A total of 20 similar documents were found (search time: 47 ms)
1.
In this study, we consider an environment composed of a heterogeneous cluster of multicore-based machines used to analyze satellite data. The workload involves large data sets and is subject to a deadline constraint. Multiple applications, each represented by a directed acyclic graph (DAG), are allocated to a dedicated heterogeneous distributed computing system. Each vertex in the DAG represents a task that needs to be executed and task execution times vary substantially across machines. The goal of this research is to assign the tasks in applications to a heterogeneous multicore-based parallel system in such a way that all applications complete before a common deadline, and their completion times are robust against uncertainties in execution times. We define a measure that quantifies robustness in this environment. We design, compare, and evaluate five static resource allocation heuristics that attempt to maximize robustness. We consider six different scenarios with different ratios of computation versus communication, and loose and tight deadlines.
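The abstract does not define its robustness measure; purely as an illustration, the sketch below assumes robustness is taken as the smallest slack between any application's predicted completion time and the common deadline, with completion times supplied by some external performance model.

```python
# Hypothetical sketch: robustness of an allocation as the minimum slack
# (deadline minus predicted completion time) over all applications.
# The completion-time model is a placeholder; the paper's actual measure
# may differ.

def robustness(completion_times, deadline):
    """Return the smallest slack; a negative value means a deadline miss."""
    return min(deadline - t for t in completion_times)

# Example: three DAG applications predicted to finish at these times
# under some candidate allocation (illustrative numbers only).
predicted = [80.0, 95.0, 60.0]
print(robustness(predicted, deadline=100.0))  # -> 5.0 (tightest application)
```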

2.
3.
We investigate two distinct issues related to resource allocation heuristics: robustness and failure rate. The target system consists of a number of sensors feeding a set of heterogeneous applications continuously executing on a set of heterogeneous machines connected together by high-speed heterogeneous links. There are two quality of service (QoS) constraints that must be satisfied: the maximum end-to-end latency and minimum throughput. A failure occurs if no allocation is found that allows the system to meet its QoS constraints. The system is expected to operate in an uncertain environment where the workload, i.e., the load presented by the set of sensors, is likely to change unpredictably, possibly resulting in a QoS violation. The focus of this paper is the design of a static heuristic that: (a) determines a robust resource allocation, i.e., a resource allocation that maximizes the allowable increase in workload until a run-time reallocation of resources is required to avoid a QoS violation, and (b) has a very low failure rate (i.e., the percentage of instances in which the heuristic fails). Two such heuristics proposed in this study are a genetic algorithm and a simulated annealing heuristic. Both were "seeded" with the best solution found by a set of fast greedy heuristics.
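A minimal sketch of the "seeding" idea on a toy task-to-machine assignment problem: a greedy solution is built first and then refined by simulated annealing. The objective (negative maximum machine load, as a stand-in for headroom against workload increases), the neighbour move, and all numbers are illustrative assumptions, not the paper's formulation.

```python
import math
import random

# Toy problem: assign n tasks to m machines; the "robustness" objective here
# (negative maximum machine load) is only a stand-in for headroom.
random.seed(0)
n_tasks, n_machines = 20, 4
cost = [[random.uniform(1, 10) for _ in range(n_machines)] for _ in range(n_tasks)]

def objective(assign):
    load = [0.0] * n_machines
    for t, m in enumerate(assign):
        load[m] += cost[t][m]
    return -max(load)  # higher is better (more balanced -> more headroom)

def greedy_seed():
    assign, load = [], [0.0] * n_machines
    for t in range(n_tasks):
        m = min(range(n_machines), key=lambda m: load[m] + cost[t][m])
        assign.append(m)
        load[m] += cost[t][m]
    return assign

def simulated_annealing(assign, temp=5.0, cooling=0.98, iters=2000):
    best = list(assign)
    for _ in range(iters):
        cand = list(assign)
        cand[random.randrange(n_tasks)] = random.randrange(n_machines)  # neighbour move
        delta = objective(cand) - objective(assign)
        if delta > 0 or random.random() < math.exp(delta / temp):
            assign = cand
            if objective(assign) > objective(best):
                best = list(assign)
        temp *= cooling
    return best

seeded = simulated_annealing(greedy_seed())  # SA started from the greedy solution
print(objective(greedy_seed()), objective(seeded))
```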

4.
Energy-efficient resource allocation within clusters and data centers is important because of the growing cost of energy. We study the problem of energy-constrained dynamic allocation of tasks to a heterogeneous cluster computing environment. Our goal is to complete as many tasks as possible by their individual deadlines and within the system energy constraint, given that task execution times are uncertain and the system is at times oversubscribed. We use Dynamic Voltage and Frequency Scaling (DVFS) to balance the energy consumption and execution time of each task. We design and evaluate (via simulation) a set of heuristics and filtering mechanisms for making allocations in our system. We show that the appropriate choice of filtering mechanisms improves performance more than the choice of heuristic (among the heuristics we tested).
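A minimal sketch of the DVFS trade-off, under the common simplifications that execution time scales as cycles/f and dynamic power roughly as the cube of frequency; the rule shown (run at the slowest frequency that still meets the deadline) and the numbers are illustrative, not the paper's heuristics or filtering mechanisms.

```python
# Illustrative DVFS operating-point choice: pick the lowest frequency that
# still meets the task's deadline, under the simplifying models
#   time(f)   ~= cycles / f
#   energy(f) ~= power(f) * time(f), with power(f) ~ f**3 (dynamic power only).
# Real heuristics (and the paper's filtering mechanisms) are more involved.

def pick_frequency(cycles, deadline, frequencies):
    feasible = [f for f in frequencies if cycles / f <= deadline]
    if not feasible:
        return None  # the task cannot meet its deadline even at top frequency
    f = min(feasible)                    # slowest feasible -> least energy
    energy = (f ** 3) * (cycles / f)     # ~ cycles * f**2 (relative units)
    return f, energy

print(pick_frequency(cycles=2.0e9, deadline=1.5,
                     frequencies=[1.0e9, 1.6e9, 2.0e9, 2.4e9]))
```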

5.
In a grid environment, resource conditions and user behavior are highly complex, and the environment is heterogeneous, so meta-task scheduling is more complicated than traditional parallel scheduling. Mapping a set of tasks onto a set of machines has been shown to be an NP problem, and the goal is generally to minimize the makespan. To address this problem, a number of heuristic task scheduling algorithms have been proposed, a representative one being the Min-Min meta-task scheduling algorithm. Building on Min-Min, this paper improves the algorithm by guiding it with virtual deadlines. Experimental results show that the proposed algorithm achieves a shorter makespan.
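For reference, a minimal sketch of the baseline Min-Min heuristic that the paper improves on: repeatedly compute each unscheduled task's best completion time over all machines, then schedule the task whose best completion time is smallest. The virtual-deadline guidance described above is not shown, and the ETC matrix is illustrative.

```python
# Baseline Min-Min meta-task scheduling sketch (not the paper's improved
# virtual-deadline variant). etc[t][m] is the estimated time to compute
# task t on machine m.

def min_min(etc, n_machines):
    ready = [0.0] * n_machines          # machine-available times
    unscheduled = set(range(len(etc)))
    schedule = {}
    while unscheduled:
        # For every (task, machine) pair compute the completion time, then
        # pick the task/machine pair with the overall smallest completion time.
        t, m, finish = min(
            ((t, m, ready[m] + etc[t][m])
             for t in unscheduled for m in range(n_machines)),
            key=lambda x: x[2])
        schedule[t] = m
        ready[m] = finish
        unscheduled.remove(t)
    return schedule, max(ready)          # assignment and makespan

etc = [[4, 6, 9], [7, 3, 5], [2, 8, 4], [6, 5, 3]]
print(min_min(etc, n_machines=3))
```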

6.
A resource allocation strategy for cloud platforms that satisfies task deadlines is proposed. A two-layer resource allocation model is constructed according to the actual conditions of the cloud platform, and an improved banker's algorithm is used to allocate resources so that task cost is minimized while task deadlines are met. Simulation experiments in the CloudSim environment show that the strategy meets task deadlines, reduces task cost, and improves system performance.
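A minimal sketch of the classical banker's-algorithm safety check that the strategy presumably extends; the two-layer cloud model, deadline handling, and cost minimization described above are not reflected here, and the numbers are the standard textbook example.

```python
# Classical banker's algorithm safety check: an allocation state is safe if
# there is some order in which every task can acquire its remaining resource
# need and then release everything it holds.

def is_safe(available, allocated, need):
    work = list(available)
    finished = [False] * len(allocated)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(need[i][j] <= work[j] for j in range(len(work))):
                # Task i can run to completion and release its allocation.
                work = [w + a for w, a in zip(work, allocated[i])]
                finished[i] = True
                progress = True
    return all(finished)

# Five tasks, three resource types (textbook example; the state is safe).
print(is_safe(available=[3, 3, 2],
              allocated=[[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]],
              need=[[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]))
```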

7.
Stochastic robustness metric and its use for static resource allocations
This research investigates the problem of robust static resource allocation for distributed computing systems operating under imposed Quality of Service (QoS) constraints. Often, such systems are expected to function in a physical environment replete with uncertainty, which causes the amount of processing required to fluctuate substantially over time. Determining a resource allocation that accounts for this uncertainty in a way that can provide a probabilistic guarantee that a given level of QoS is achieved is an important research problem. The stochastic robustness metric proposed in this research is based on a mathematical model in which the relationship between uncertainty in system parameters and its impact on system performance is described stochastically. The utility of the established metric is then exploited in the design of optimization techniques, based on greedy and iterative approaches, that address the problem of resource allocation in a large class of distributed systems operating on periodically updated data sets. The performance results are presented for a simulated environment that replicates a heterogeneous cluster-based radar data processing center. A mathematical lower bound on performance is presented for comparison analysis of the heuristic results. The lower bound is derived from a relaxation of the Integer Linear Programming formulation of the given resource allocation problem.
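A minimal sketch of the underlying idea: estimate, for a candidate allocation, the probability that a makespan-style QoS bound is met when execution times are random. The Monte Carlo estimator and the normal distributions below are illustrative assumptions; the paper derives its metric analytically from the execution-time distributions rather than by sampling.

```python
import random

# Monte Carlo estimate of a stochastic-robustness-style quantity: the
# probability that every machine in a given allocation finishes its assigned
# tasks within the QoS bound, with per-task execution times drawn from
# (illustrative) normal distributions.
random.seed(1)

def stochastic_robustness(allocation, qos_bound, trials=20000):
    hits = 0
    for _ in range(trials):
        finish = {}
        for machine, tasks in allocation.items():
            finish[machine] = sum(max(0.0, random.gauss(mu, sigma))
                                  for mu, sigma in tasks)
        if max(finish.values()) <= qos_bound:
            hits += 1
    return hits / trials

# Two machines, each with the (mean, std-dev) execution times of its tasks.
alloc = {"m0": [(10, 2), (12, 3)], "m1": [(8, 1), (9, 2), (5, 1)]}
print(stochastic_robustness(alloc, qos_bound=27.0))
```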

8.
In this paper, we study the problem of scheduling tasks on a distributed system with the aim of simultaneously minimizing energy consumption and makespan, subject to deadline constraints and the tasks' memory requirements. A total of eight heuristics are introduced to solve the task scheduling problem. The set of heuristics includes six greedy algorithms and two nature-inspired genetic algorithms. The heuristics are extensively simulated and compared using a simulation test-bed that covers a wide range of task heterogeneity and a variety of problem sizes. When evaluating the heuristics, we analyze the energy consumption, makespan, and execution time of each heuristic. The main benefit of this study is to allow readers to select an appropriate heuristic for a given scenario.

9.
Clusters of computers have emerged as mainstream parallel and distributed platforms for high-performance, high-throughput and high-availability computing. To enable effective resource management on clusters, numerous cluster management systems and schedulers have been designed. However, their focus has essentially been on maximizing CPU performance rather than on improving the value of utility delivered to the user and the quality of service. This paper presents a new computational-economy-driven scheduling system called Libra, which has been designed to support allocation of resources based on the users' quality-of-service requirements. It is intended to work as an add-on to the existing queuing and resource management system. The first version has been implemented as a plugin scheduler for the Portable Batch System. The scheduler offers a market-based, economy-driven service for managing batch jobs on clusters by scheduling CPU time according to user-perceived value (utility), determined by the users' budget and deadline rather than by system performance considerations. The Libra scheduler has been simulated using the GridSim toolkit to carry out a detailed performance analysis. Results show that the deadline- and budget-based proportional resource allocation strategy improves the utility of the system and user satisfaction compared with system-centric scheduling strategies. Copyright © 2004 John Wiley & Sons, Ltd.
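Purely as an illustration of deadline-driven proportional sharing, the sketch below assigns each job a CPU share proportional to the rate it needs in order to finish by its deadline. This is one plausible rule under stated assumptions, not necessarily Libra's exact policy (which also weighs the user's budget).

```python
# One plausible deadline-driven proportional-share rule (illustrative only):
# each job's required rate is remaining_work / time_to_deadline, and CPU
# shares are handed out in proportion to those required rates.

def proportional_shares(jobs, now):
    rates = {name: work / max(deadline - now, 1e-9)
             for name, (work, deadline) in jobs.items()}
    total = sum(rates.values())
    return {name: r / total for name, r in rates.items()}

jobs = {                 # name: (remaining work in CPU-seconds, deadline in s)
    "jobA": (120.0, 100.0),
    "jobB": (30.0, 60.0),
    "jobC": (200.0, 400.0),
}
print(proportional_shares(jobs, now=0.0))
```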

10.
Heterogeneous computing (HC) environments composed of interconnected machines with varied computational capabilities are well suited to meet the computational demands of large, diverse groups of tasks. One aspect of resource allocation in HC environments is matching tasks with machines and scheduling task execution on the assigned machines. We refer to this matching and scheduling process as mapping. The problem of mapping these tasks onto the machines of a distributed HC environment has been shown, in general, to be NP-complete. Therefore, the development of heuristic techniques to find near-optimal solutions is required. In the HC environment investigated, tasks have deadlines, priorities, multiple versions, and may be composed of communicating subtasks. The best static (off-line) techniques from some previous studies are adapted and applied to this mapping problem: a genetic algorithm (GA), a GENITOR-style algorithm, and a two-phase greedy technique based on the concept of Min–min heuristics. Simulation studies compare the performance of these heuristics in several overloaded scenarios, i.e., where not all tasks can be executed by their deadlines. The performance measure used is the sum of the weighted priorities of tasks that completed before their deadlines, adjusted based on the version of the task used. It is shown that for the cases studied here, the GENITOR technique finds the best results, but the faster two-phase greedy approach also performs very well.

11.
In the mobile Internet environment, grouping users with similar service interests according to their behavioral patterns can provide strong support for accurate service recommendation and effective resource configuration. We therefore propose a user grouping algorithm based on improved fuzzy clustering theory. First, service-interest similarity and service-order similarity are defined, and from them a comprehensive user similarity index is established. Second, a fuzzy clustering model based on the comprehensive user similarity is constructed; a grid-partitioning method is used to determine the initial group centers, and the number of user groups is adjusted according to the average user membership degree, achieving fast and accurate user grouping. Simulation results verify the effectiveness of the algorithm.

12.
Utilization Bounds for EDF Scheduling on Real-Time Multiprocessor Systems
The utilization bound for earliest deadline first (EDF) scheduling is extended from uniprocessors to homogeneous multiprocessor systems with partitioning strategies. First results are provided for a basic task model, which includes periodic and independent tasks with deadlines equal to periods. Since the multiprocessor utilization bounds depend on the allocation algorithm, different allocation algorithms have been considered, ranging from simple heuristics to optimal allocation algorithms. As multiprocessor utilization bounds for EDF scheduling depend strongly on task sizes, all these bounds have been obtained as a function of a parameter which takes task sizes into account. Theoretically, the utilization bounds for multiprocessor EDF scheduling can be considered a partial solution to the bin-packing problem, which is known to be NP-complete. The basic task model is extended to include resource sharing, release jitter, deadlines less than periods, aperiodic tasks, non-preemptive sections, context switches, and mode changes.
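A minimal sketch of one of the simple partitioning heuristics discussed above: first-fit allocation with the per-processor EDF test that total utilization must not exceed 1 (valid for the basic model with independent periodic tasks and deadlines equal to periods). Task utilizations and processor count are illustrative.

```python
# Partitioned EDF with first-fit allocation. For the basic task model
# (independent periodic tasks, deadlines equal to periods), a processor is
# schedulable under EDF iff its total utilization is <= 1.

def first_fit_edf(utilizations, n_procs):
    bins = [0.0] * n_procs
    placement = []
    for u in utilizations:
        for p in range(n_procs):
            if bins[p] + u <= 1.0 + 1e-12:
                bins[p] += u
                placement.append(p)
                break
        else:
            return None          # allocation failed: task set not accepted
    return placement, bins

tasks = [0.6, 0.5, 0.4, 0.3, 0.3, 0.2]   # task utilizations (C_i / T_i)
print(first_fit_edf(tasks, n_procs=3))
```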

13.
In recent years, multimedia cloud computing has become a promising technology that can effectively process multimedia services and provide quality-of-service (QoS) provisioning for multimedia applications from anywhere, at any time, and on any device, at lower cost. However, two major challenges exist in this emerging computing paradigm: one is task management, which maps multimedia tasks to virtual machines, and the other is resource management, which maps virtual machines (VMs) to physical servers. In this study, we aim to provide an efficient solution that jointly addresses these challenges. In particular, a queuing-based approach for task management and a heuristic algorithm for resource management are proposed. By adopting an allocation deadline in each VM request, both the task manager and the VM allocator have better opportunities to optimize cost while satisfying the constraints on the quality of multimedia service. Various simulations were conducted to validate the efficiency of the proposed task and resource management approaches. The results show that the proposed solutions provide better performance than existing state-of-the-art approaches.

14.
Cluster scheduling, where processors are grouped into clusters and the tasks allocated to one cluster are scheduled by a global scheduler, has recently attracted attention in multiprocessor real-time systems research. In this paper, assuming that an optimal global scheduler is adopted within each cluster, we investigate the worst-case utilization bounds for cluster scheduling with different task allocation/partitioning heuristics. First, we develop a lower limit on the utilization bounds for cluster scheduling with any reasonable task allocation scheme. Then, the lower limit is shown to be the exact utilization bound for cluster scheduling with the worst-fit task allocation scheme. For other task allocation heuristics (such as first-fit, best-fit, first-fit decreasing, best-fit decreasing, and worst-fit decreasing), higher utilization bounds are derived for systems with both homogeneous clusters (where each cluster has the same number of processors) and heterogeneous clusters (where clusters have different numbers of processors). In addition, focusing on an efficient optimal global scheduler, namely the boundary-fair (Bfair) algorithm, we propose a period-aware task allocation heuristic with the goal of reducing the scheduling overhead (e.g., the number of scheduling points, context switches, and task migrations). Simulation results indicate that the percentage of task sets that can be scheduled is significantly improved under cluster scheduling, even for small clusters, compared to partitioned scheduling. Moreover, compared to a simple generic task allocation scheme (e.g., first-fit), the proposed period-aware task allocation heuristic markedly reduces the scheduling overhead of cluster scheduling with the Bfair scheduler.
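A minimal sketch of the worst-fit allocation scheme at cluster granularity: each task goes to the cluster with the most remaining capacity, where a cluster of k processors is modelled as having capacity k (consistent with an optimal global scheduler inside the cluster). Cluster sizes and task utilizations are illustrative.

```python
# Worst-fit task allocation across clusters: place each task on the cluster
# with the largest remaining utilization capacity. A cluster with k processors
# is modelled with capacity k, assuming an optimal global scheduler inside it.

def worst_fit(utilizations, cluster_sizes):
    remaining = [float(k) for k in cluster_sizes]
    placement = []
    for u in utilizations:
        c = max(range(len(remaining)), key=lambda i: remaining[i])
        if remaining[c] < u:
            return None                  # no cluster can accept this task
        remaining[c] -= u
        placement.append(c)
    return placement, remaining

tasks = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4]
print(worst_fit(tasks, cluster_sizes=[2, 2, 1]))  # two 2-processor clusters, one uniprocessor
```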

15.
The goal of service differentiation is to provide different levels of service quality to meet changing system configurations and resource availability and to satisfy the different requirements and expectations of applications and users. In this paper, we investigate the problem of quantitative service differentiation on cluster-based delay-sensitive servers. The goal is to support a system-wide service quality optimization with respect to resource allocation on a computer system while providing proportional fairness to clients. We first propose and promote a square-root proportional differentiation model. Interestingly, both popular delay factors, queueing delay and slowdown, are inversely proportional to the allocated resource usage. We formulate the problem of quantitative service differentiation as a generalized resource allocation optimization towards the minimization of system delay, defined as the sum of the weighted delays of client requests. We prove that the optimization-based resource allocation scheme essentially provides square-root proportional service differentiation to clients. We then study the problem of service differentiation provisioning from the perspective of an important relative performance metric, slowdown. We give a closed-form expression for the expected slowdown of a popular heavy-tailed workload model with respect to resource allocation on a server cluster. We design a two-tier resource management framework, which integrates a dispatcher-based node partitioning scheme and a server-based adaptive process allocation scheme. We evaluate the resource allocation framework with different models via extensive simulations. Results show that the square-root proportional model provides service differentiation at a minimum cost in system delay. The two-tier resource allocation framework can provide fine-grained and predictable service differentiation on cluster-based servers.
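A short worked derivation of why delay-minimizing allocation yields square-root proportional differentiation, assuming only (as the abstract states) that a class's delay is inversely proportional to its allocated resource share; the notation below is introduced here for illustration.

```latex
% Minimize total weighted delay with each class's delay D_i inversely
% proportional to its allocated share r_i, under a fixed budget R:
\min_{r_1,\dots,r_n} \sum_i \frac{w_i}{r_i}
\qquad \text{s.t.} \qquad \sum_i r_i = R .
% Lagrangian stationarity, -w_i / r_i^2 + \lambda = 0, gives
r_i \propto \sqrt{w_i}
\qquad\Longrightarrow\qquad
\frac{D_i}{D_j} = \frac{r_j}{r_i} = \sqrt{\frac{w_j}{w_i}} ,
% i.e., square-root proportional service differentiation.
```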

16.
Effectively scheduling workflow applications in a highly dynamic grid environment while satisfying users' QoS requirements is a difficult problem. Traditional heuristic scheduling algorithms based on static resource characteristics, as well as reservation strategies, lack an effective evaluation of resources' dynamic service capability and therefore cannot guarantee the deadline constraints of workflow applications. This paper uses a stochastic service model to characterize the dynamic performance of grid resources and takes the failure of processing units within a resource into account. A birth-death process is used to describe how the number of processing units in a resource node varies, and a method is given for evaluating a resource node's reliability within a task's deadline. On this basis, a reliability-enhanced grid workflow scheduling algorithm, RSA_TC, is proposed. Experimental results show that, compared with the DSESAW and PFAS algorithms, RSA_TC effectively guarantees users' deadline requirements and adapts well to dynamic grid environments.

17.
This paper focuses on task allocation with single-task robots, multi-robot tasks and instantaneous assignment, which has been shown to be strongly NP-hard. Although this problem has been studied extensively, few efficient approximation algorithms have been provided due to its inherent complexity. In this paper, we first provide discussions and analyses for two natural greedy heuristics for solving this problem. Then, a new greedy heuristic is introduced, which considers inter-task resource constraints to approximate the influence between different assignments in task allocation. Instead of only looking at the utility of the assignment, our approach computes the expected loss of utility (due to the assigned robots and task) as an offset and uses the offset utility for making the greedy choice. A formal analysis is provided for the new heuristic, which reveals that the solution quality is bounded by two different factors. A new algorithm is then provided to approximate the new heuristic for performance improvement. Finally, for more complicated applications, we extend this problem to include general task dependencies and provide a result on the hardness of approximating this new formulation. Comparison results with the two natural heuristics in simulation are provided for both formulations, which show that the new approach achieves improved performance.
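A minimal sketch of the offset-utility idea on a toy instance: each candidate assignment's raw utility is reduced by an estimate of the utility foregone because its robots and task become unavailable. The loss estimate used below (the best utility among the candidates it would block) is a simple placeholder, not the paper's exact expected-loss definition.

```python
import itertools
import random

# Greedy multi-robot task allocation with an "offset utility": the value of a
# candidate assignment is its utility minus the best utility among the other
# candidates it would block (same task or overlapping robots). This loss
# estimate is a simple stand-in for the expected-loss offset in the paper.

def offset_greedy(utility, robots, tasks, team_size):
    free_r, free_t, plan = set(robots), set(tasks), []
    while free_t and len(free_r) >= team_size:
        cands = [(team, t) for t in free_t
                 for team in itertools.combinations(sorted(free_r), team_size)]
        def offset_value(c):
            team, t = c
            blocked = [utility[o] for o in cands
                       if o != c and (o[1] == t or set(o[0]) & set(team))]
            return utility[c] - max(blocked, default=0.0)
        best = max(cands, key=offset_value)
        plan.append(best)
        free_r -= set(best[0])
        free_t.discard(best[1])
    return plan

random.seed(2)
robots, tasks = ["r1", "r2", "r3", "r4"], ["t1", "t2"]
utility = {(team, t): round(random.uniform(1, 10), 1)
           for t in tasks for team in itertools.combinations(robots, 2)}
print(offset_greedy(utility, robots, tasks, team_size=2))
```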

18.
Multi-core technologies are widely used in embedded systems, and resource allocation is vital to guaranteeing the Quality of Service (QoS) requirements of applications on multi-core platforms. For heterogeneous multi-core systems, the statistical characteristics of execution times on different cores play a critical role in resource allocation, and differences between the actual execution time and the estimated execution time may significantly affect the performance of resource allocation and make the system less robust. In this paper, we present an evaluation method to study the impact of inaccurate execution time information on the performance of resource allocation. We propose a systematic way to measure the robustness degradation of the system and evaluate how inaccurate probability parameters may affect the performance of resource allocations. Furthermore, we use simulations to compare the performance of three widely used greedy heuristics when they operate on inaccurate information.

19.
CoreOS is a new Docker-based containerized cluster server operating system that is developing rapidly and has been adopted by mainstream cloud providers such as OpenStack, Kubernetes, Salesforce, and Ebay. Workloads in cloud environments are dynamic and their resource demands change accordingly, which makes efficient use of cluster resources challenging: statically pre-allocating peak resources wastes a great deal of cloud capacity, and idle machines waste a great deal of energy. This paper proposes a load-consolidation-oriented cluster scheduling system (LICSS) that monitors the cluster load distribution in real time, uses a compact scheduling policy to assign compute nodes at scheduling time, and dynamically consolidates load at run time through task migration, so that idle resources can be reclaimed promptly and wasted energy reduced. LICSS implements node load measurement, task measurement, and a load consolidation algorithm, and derives an adaptive per-node load threshold. Experiments show that LICSS effectively consolidates load according to how cluster load varies across different periods, improving average resource utilization by 12.2%, and that, based on task consolidation, it puts surplus nodes to sleep during low-load periods to reduce cluster energy consumption.

20.
Efficient resource allocation is a complex and dynamic task in business process management. Although a wide variety of mechanisms are emerging to support resource allocation in business process execution, these approaches do not consider performance optimization. This paper introduces a mechanism in which the resource allocation optimization problem is modeled as a Markov decision process and solved using reinforcement learning. The proposed mechanism observes its environment to learn policies that optimize resource allocation in business process execution. The experimental results indicate that the proposed approach outperforms well-known heuristic or hand-coded strategies, and may improve the current state of business process management.
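A minimal sketch of the Markov-decision-process framing with tabular Q-learning on a toy allocation problem: the state is the current queue length, the action is how many resources to allocate, and the reward penalizes both waiting and resource cost. The state/action/reward design and all constants are illustrative assumptions, not the paper's model of business process execution.

```python
import random

# Tabular Q-learning on a toy resource-allocation MDP: state = number of
# waiting work items (capped), action = number of resources to allocate this
# step, reward = -(waiting cost + resource cost). Purely illustrative.
random.seed(3)
MAX_Q, ACTIONS = 10, [0, 1, 2, 3]
Q = {(s, a): 0.0 for s in range(MAX_Q + 1) for a in ACTIONS}

def step(state, action):
    arrivals = random.choice([0, 1, 2])
    served = min(state, action)
    next_state = min(MAX_Q, state - served + arrivals)
    reward = -(1.0 * next_state + 0.4 * action)   # waiting cost + resource cost
    return next_state, reward

alpha, gamma, eps = 0.1, 0.9, 0.1
state = 0
for _ in range(50000):
    action = (random.choice(ACTIONS) if random.random() < eps
              else max(ACTIONS, key=lambda a: Q[(state, a)]))
    nxt, reward = step(state, action)
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = nxt

policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(MAX_Q + 1)}
print(policy)   # learned resources-to-allocate for each queue length
```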
