Related Articles
20 related articles found
1.
With the ever-growing volume of service requests from customers worldwide, cloud systems must provide services while keeping customers satisfied. To achieve better reliability and performance, cloud systems have increasingly come to rely on geographically distributed data centers. However, the dollar cost of service placement incurred by service providers (SPs) differs across regions, so it is crucial to design request-dispatching and resource-allocation algorithms that maximize net profit. Existing algorithms are either built on energy-efficient schemes alone or are oblivious to multi-type requests and customer satisfaction, and therefore cannot be applied to satisfaction-aware, multi-type request dispatching with the objective of maximizing net profit. This paper proposes an ant-colony-optimization-based algorithm for maximizing the SP's net profit (AMP) on geographically distributed data centers while accounting for customer satisfaction. First, using a model of customer satisfaction, we formulate utility (net profit) maximization as an optimization problem under customer-satisfaction and data-center constraints. Second, we analyze the complexity of the optimal request-dispatching problem and prove that it is NP-complete. Third, we evaluate the proposed algorithm through comprehensive simulations and comparisons with state-of-the-art algorithms, and we extend the work to account for each data center's power usage effectiveness. The results show that AMP maximizes the SP's net profit by dispatching service requests to the proper data centers and instantiating an appropriate number of virtual machines to meet customer satisfaction. We also demonstrate the effectiveness of the approach under dynamically arriving heavy workloads, various evaporation rates, and the consideration of power usage effectiveness. Copyright © 2014 John Wiley & Sons, Ltd.
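As an illustration of the general idea behind ant-colony-based dispatching (not the paper's actual AMP algorithm), the sketch below lets artificial ants assign requests to regions with probability proportional to pheromone times per-request profit, then evaporates the trails and reinforces those of the best plan found so far. All request names, regions, profit figures, and parameter values are invented for the example.

```python
import random

# Hypothetical per-request profit if served in a given region (revenue minus
# placement cost); in the paper this would come from the satisfaction model.
profit = {
    ("req1", "us-east"): 5.0, ("req1", "eu-west"): 3.0,
    ("req2", "us-east"): 2.0, ("req2", "eu-west"): 4.5,
    ("req3", "us-east"): 1.5, ("req3", "eu-west"): 1.0,
}
requests = ["req1", "req2", "req3"]
centers = ["us-east", "eu-west"]

pheromone = {key: 1.0 for key in profit}        # initial pheromone trails
EVAPORATION, ANTS, ITERATIONS = 0.1, 10, 50

best_plan, best_profit = None, float("-inf")
for _ in range(ITERATIONS):
    for _ in range(ANTS):
        # Each ant dispatches every request, favouring high pheromone x profit.
        plan = {}
        for r in requests:
            weights = [pheromone[(r, c)] * max(profit[(r, c)], 1e-6) for c in centers]
            plan[r] = random.choices(centers, weights=weights)[0]
        total = sum(profit[(r, c)] for r, c in plan.items())
        if total > best_profit:
            best_plan, best_profit = plan, total
    # Evaporate all trails, then reinforce the best plan found so far.
    for key in pheromone:
        pheromone[key] *= (1.0 - EVAPORATION)
    for r, c in best_plan.items():
        pheromone[(r, c)] += best_profit

print(best_plan, best_profit)
```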

2.
This paper studies the scheduling of big-data analysis jobs based on the MapReduce model and analyzes the shortcomings of existing task-scheduling algorithms: they consider neither the impact of resource allocation on job deadlines nor the differing deadline sensitivity of different job types. Because a job's completion time changes with the resources allocated to it, such jobs are called elastic jobs; deadline sensitivity refers to how strictly different job types must meet their deadlines. To address these issues, a deadline-aware elastic job scheduling algorithm (DA) is proposed. The algorithm classifies jobs by deadline sensitivity and, based on a prediction of each job's overall execution time, adjusts the resource-allocation strategy to change job completion times; it combines users' deadline requirements with the expected benefit of running each job to plan resource allocation and scheduling order in advance so that the overall benefit is maximized. The algorithm was evaluated in a simulated cluster of 210 physical nodes, and the experiments show that it meets deadline constraints while improving the overall job benefit by a factor of 2.37 on average.
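A minimal sketch of the kind of deadline-sensitivity-driven planning the abstract describes (not the DA algorithm itself): deadline-sensitive elastic jobs are planned first, and each receives just enough slots to finish by its deadline, leaving the remaining capacity for insensitive jobs. The job records, cluster capacity, and completion-time model (work divided by slots) are simplifying assumptions.

```python
import math

# Hypothetical elastic-job records: total work (slot-seconds), benefit if the
# deadline is met, the deadline itself, and whether the job type is
# deadline-sensitive. Names and numbers are illustrative only.
jobs = [
    {"id": "A", "work": 400.0, "benefit": 100.0, "deadline": 60.0,  "sensitive": True},
    {"id": "B", "work": 900.0, "benefit": 250.0, "deadline": 120.0, "sensitive": True},
    {"id": "C", "work": 300.0, "benefit": 40.0,  "deadline": 300.0, "sensitive": False},
]
CAPACITY = 20  # total slots in the (simulated) cluster

plan, remaining = {}, CAPACITY
# Deadline-sensitive jobs are planned first; each gets the minimum number of
# slots that still meets its deadline (completion time = work / slots for an
# elastic job), leaving spare slots for the insensitive, benefit-driven jobs.
for job in sorted(jobs, key=lambda j: (not j["sensitive"], j["deadline"])):
    need = math.ceil(job["work"] / job["deadline"]) if job["sensitive"] else 1
    slots = min(need, remaining)
    plan[job["id"]] = slots
    remaining -= slots

print(plan, "spare slots:", remaining)
```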

3.
In this paper, a heuristic dynamic scheduling scheme for parallel real-time jobs executing on a heterogeneous cluster is presented. In our system model, parallel real-time jobs, which are modeled by directed acyclic graphs, arrive at a heterogeneous cluster following a Poisson process. A job is said to be feasible if all its tasks meet their respective deadlines. The scheduling algorithm proposed in this paper takes reliability measures into account, thereby enhancing the reliability of heterogeneous clusters without any additional hardware cost. To make scheduling results more realistic and precise, we incorporate scheduling and dispatching times into the proposed scheduling approach. An admission control mechanism is in place so that parallel real-time jobs whose deadlines cannot be guaranteed are rejected by the system. For the experimental performance study, we have considered a real-world application as well as synthetic workloads. Simulation results show that compared with existing scheduling algorithms in the literature, our scheduling algorithm reduces reliability cost by up to 71.4% (with an average of 63.7%) while improving schedulability over a spectrum of workload and system parameters. Furthermore, the results suggest that shortening scheduling times leads to a higher guarantee ratio. Hence, if parallel scheduling algorithms are applied to shorten scheduling times, the performance of heterogeneous clusters will be further enhanced.

4.
Aggressive scaling of technology size has dramatically increased power density and degraded the reliability of real-time embedded systems. In this paper, we study reliability-conscious energy minimization for scheduling fixed-priority real-time embedded systems with weakly hard QoS constraints. The weakly hard QoS constraint is modeled as an (m, k)-constraint, which requires that at least m out of any k consecutive jobs of a task meet their deadlines. We first propose a technique that balances static and dynamic energy consumption for real-time jobs by determining speeds over their feasible intervals better than the classical strategies. Based on it, we then propose an adaptive fixed-priority scheduling scheme that reduces the system's energy consumption while preserving its reliability. Through extensive simulations, our experimental results demonstrate that the proposed techniques significantly outperform previous research in energy performance while satisfying the weakly hard QoS constraint under the reliability requirement.
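For readers unfamiliar with (m, k)-constraints, the snippet below shows the classical static partition of a task's jobs into mandatory and optional ones (often attributed to Ramanathan); papers in this list build their energy and reliability management on top of such a partition, though their exact schemes differ.

```python
def is_mandatory(j: int, m: int, k: int) -> bool:
    """Classical (m, k)-pattern (integer arithmetic): job index j (0-based) is
    mandatory iff j == floor(ceil(j*m/k) * k / m). The mandatory jobs alone put
    at least m deadline-meeting jobs in every window of k consecutive jobs."""
    return j == ((j * m + k - 1) // k) * k // m

# For an (m, k) = (2, 3) constraint the pattern repeats as M M O ...
pattern = ["M" if is_mandatory(j, 2, 3) else "O" for j in range(9)]
print("".join(pattern))   # MMOMMOMMO
```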

5.
To address the energy consumption of cloud data centers, this paper proposes a green cloud computing framework and designs a green cloud system architecture. Based on this architecture, energy is treated as a system resource to be allocated, and three green task-scheduling algorithms, STF-OS, LTF-OS, and RT-OS, are proposed. A theoretical feasibility analysis of the three algorithms shows that all of them can effectively reduce energy consumption. Simulation experiments implemented by extending the CloudSim cloud simulation platform show that STF-OS is the most effective of the three at reducing data-center energy consumption.

6.
Cloud computing is a form of distributed computing that promises to deliver reliable services through next-generation data centers built on virtualized compute and storage technologies. As cloud computing becomes truly ubiquitous and cloud infrastructures become essential components for providing Internet services, cloud providers are deploying ever more energy-hungry data centers. Because providers often rely on large data centers to offer the resources users require, the energy consumed by cloud infrastructures has become a key environmental and economic concern. Much of this energy is wasted on under-utilized resources, contributing to global warming. To conserve energy, these under-utilized resources must be used efficiently, which requires allocating jobs to cloud resources in a way that improves both performance and energy efficiency. In this paper, a model for an energy-aware resource utilization technique is proposed to manage cloud resources efficiently and enhance their utilization. It further reduces cloud energy consumption by using server consolidation through virtualization without degrading the performance of users' applications. An artificial-bee-colony-based energy-aware resource utilization technique corresponding to the model is designed to allocate jobs to resources in a cloud environment. The performance of the proposed algorithm is evaluated against existing algorithms using the CloudSim toolkit. The experimental results demonstrate that the proposed technique outperforms existing techniques by minimizing the energy consumption and execution time of applications submitted to the cloud. Copyright © 2014 John Wiley & Sons, Ltd.

7.
Job scheduling in a computational grid is a complex problem, and various heuristics and meta-heuristics have been proposed for it. These approaches usually optimize specific parameters while allocating jobs to grid resources, but it is often desirable to optimize multiple parameters at once. The non-dominated sorting genetic algorithm (NSGA-II) has been observed to be the best meta-heuristic for solving such multi-objective optimization problems. The proposed work applies NSGA-II to job scheduling in a computational grid with three conflicting objectives: maximizing the reliability of the system for job allocation, minimizing energy consumption, and balancing the load on the system. The performance of the proposed model is studied by simulating it on real data, and the results indicate that it performs well across the multiple objectives.
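The core of NSGA-II is non-dominated sorting; the toy sketch below extracts the first Pareto front from a handful of hypothetical candidate schedules scored on the three objectives mentioned above (reliability is negated so that every objective is minimized). Candidate names and scores are invented for illustration.

```python
# Each candidate schedule is scored on three objectives, all converted to
# "smaller is better": negated reliability, energy consumption, load imbalance.
candidates = {
    "s1": (-0.95, 120.0, 0.30),
    "s2": (-0.90, 100.0, 0.25),
    "s3": (-0.97, 150.0, 0.40),
    "s4": (-0.85, 110.0, 0.35),
}

def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# First non-dominated front: the schedules no other candidate dominates.
front = [name for name, score in candidates.items()
         if not any(dominates(other, score)
                    for o, other in candidates.items() if o != name)]
print(front)   # ['s1', 's2', 's3'] -- s4 is dominated by s2
```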

8.
Job scheduling in data centers can be considered from a cyber–physical point of view, as it affects the data center's computing performance (i.e. the cyber aspect) and energy efficiency (the physical aspect). Driven by the growing need to green contemporary data centers, this paper uses recent technological advances in data center virtualization and proposes cyber–physical, spatio-temporal (i.e. start time and servers assigned), thermal-aware job scheduling algorithms that minimize the energy consumption of the data center under performance constraints (i.e. deadlines). Savings are possible by being able to temporally "spread" the workload, assign it to energy-efficient computing equipment, and further reduce the heat recirculation and therefore the load on the cooling systems. This paper provides three categories of thermal-aware energy-saving scheduling techniques: (a) FCFS-Backfill-XInt and FCFS-Backfill-LRH, thermal-aware job placement enhancements to the popular first-come first-serve with back-filling (FCFS-backfill) scheduling policy; (b) EDF-LRH, an online earliest deadline first scheduling algorithm with thermal-aware placement; and (c) an offline genetic algorithm for SCheduling to minimize thermal cross-INTerference (SCINT), which is suited for batch scheduling of backlogs. Simulation results, based on real job logs from the ASU Fulton HPC data center, show that the thermal-aware enhancements to FCFS-backfill achieve up to 25% savings compared to FCFS-backfill with first-fit placement, depending on the intensity of the incoming workload, while SCINT achieves up to 60% savings. The performance of EDF-LRH nears that of the offline SCINT for low loads, and it degrades to the performance of FCFS-backfill for high loads. However, EDF-LRH requires milliseconds of operation, which is significantly faster than SCINT, the latter requiring up to hours of runtime depending upon the number and size of submitted jobs. Similarly, FCFS-Backfill-LRH is much faster than FCFS-Backfill-XInt, but it achieves only part of FCFS-Backfill-XInt's savings.
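As a rough sketch of "lowest recirculated heat"-style placement: the paper's XInt and LRH heuristics are defined over a calibrated heat-recirculation model, so the toy matrix and ranking rule below are illustrative assumptions only, showing the flavour of preferring servers whose heat disturbs other inlets the least.

```python
# Toy heat-recirculation matrix: entry D[i][j] is the fraction of server j's
# dissipated heat that recirculates into server i's inlet (values made up).
D = [
    [0.00, 0.05, 0.10],
    [0.05, 0.00, 0.08],
    [0.02, 0.03, 0.00],
]
servers = ["srv0", "srv1", "srv2"]

# LRH-style ranking: prefer servers whose heat contributes least to other
# inlets, i.e. the smallest column sum of the recirculation matrix.
recirculated = {servers[j]: sum(D[i][j] for i in range(len(D)))
                for j in range(len(servers))}
placement_order = sorted(servers, key=recirculated.get)
print(placement_order, recirculated)
```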

9.
In commercial grid computing environments, jobs carry budget and deadline constraints. A key problem is how to provide consumers with quality-guaranteed services while also considering the interests of the service providers. Existing job-scheduling algorithms optimize only job completion time and cost from the consumer's perspective. This paper instead takes both the consumer's and the provider's perspectives, defines a job's value density from its attributes, and on that basis proposes HVDF, a grid job-scheduling algorithm that gives priority to jobs of high value density. Simulation results show that HVDF outperforms existing algorithms on two metrics: the realized value rate and the number of jobs completed on time.
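A minimal sketch of value-density-first ordering, assuming a simple reading of value density as net value (consumer value minus provider cost) per unit of execution time; the paper defines its own value-density formula from job attributes, so the records and formula here are illustrative.

```python
# Hypothetical job records: value (what the consumer pays if finished on time),
# cost (what the provider spends to run it), and estimated execution time.
jobs = [
    {"id": "j1", "value": 90.0, "cost": 30.0, "runtime": 20.0, "deadline": 50.0},
    {"id": "j2", "value": 40.0, "cost": 10.0, "runtime": 5.0,  "deadline": 15.0},
    {"id": "j3", "value": 70.0, "cost": 50.0, "runtime": 10.0, "deadline": 25.0},
]

def value_density(job):
    # Net value earned per unit of execution time (one plausible definition).
    return (job["value"] - job["cost"]) / job["runtime"]

# Highest-value-density-first dispatch order.
for job in sorted(jobs, key=value_density, reverse=True):
    print(job["id"], round(value_density(job), 2))
```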

10.
In this paper we investigate dynamic speed scaling, a technique to reduce energy consumption in variable-speed microprocessors. While prior research has focused mostly on single-processor environments, here we investigate multiprocessor settings. We study the basic problem of scheduling a set of jobs, each specified by a release date, a deadline, and a processing volume, on variable-speed processors so as to minimize the total energy consumption. We first settle the problem's complexity when unit-size jobs have to be scheduled. More specifically, we devise a polynomial-time algorithm for jobs with agreeable deadlines and prove NP-hardness results if jobs have arbitrary deadlines. For the latter setting we also develop a polynomial-time algorithm achieving a constant-factor approximation guarantee. Additionally, we study problem settings where jobs have arbitrary processing requirements and, again, develop constant-factor approximation algorithms. We finally transform our offline algorithms into constant-competitive online strategies.
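The speed-scaling literature summarized here typically assumes a power-law energy model; the snippet below shows why stretching the same work over a longer feasible interval reduces energy (the energy function is convex in speed). The value of the exponent is an assumption chosen for illustration.

```python
# Standard speed-scaling power model: running at speed s consumes power s**ALPHA
# (ALPHA is typically around 3 for CMOS). A job with processing volume v finished
# within an interval of length T uses energy T * s**ALPHA with s = v / T, so the
# slowest feasible constant speed minimizes energy.
ALPHA = 3.0

def energy(volume: float, interval: float) -> float:
    """Energy of running `volume` units of work evenly over `interval` time."""
    speed = volume / interval
    return interval * speed ** ALPHA

# Finishing the same 10 units of work in a 5-unit window instead of a 2-unit
# window cuts energy by a factor of (5/2)**(ALPHA-1) = 6.25.
print(energy(10, 2))   # 250.0
print(energy(10, 5))   # 40.0
```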

11.
In optical switching networks, users' delay requirements differ with the nature of the data being transmitted; satisfying these differentiated delay requirements while keeping optical switch scheduling efficient is an important determinant of network performance. Existing work on optical network scheduling mostly schedules either slot by slot or per batch of packets: the former ignores reconfiguration overhead and cannot handle large-scale data exchange, while the latter ignores differing delays and the need for QoS guarantees. To handle the differing delay requirements of data in data-center optical switches, this paper proposes two new scheduling algorithms, SDF (stringent delay first) and m-SDF (m-order stringent delay first), which take into account the differentiated delay requirements of packets, the configuration order, reconfiguration overhead, and speedup. Traffic is scheduled greedily: in each step the packet most sensitive to delay is scheduled first so that its delay requirement is met. Under the premise of guaranteeing the delivery rate, the proposed algorithms satisfy the transmission-delay requirements of as many packets as possible. Simulation experiments show that both algorithms achieve a high delay-satisfaction rate, demonstrating their effectiveness.
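A toy rendering of the greedy "stringent delay first" idea for one scheduling slot, under the simplifying assumptions that each input and output port can carry one packet per slot and that reconfiguration overhead and speedup are ignored; the packet data are invented.

```python
# Each packet names an input port, an output port, and a delay budget; the
# packet with the tightest budget is tried first, and it is accepted only if
# both of its ports are still free in this slot.
packets = [
    {"id": "p1", "inp": 0, "out": 1, "delay_budget": 2},
    {"id": "p2", "inp": 0, "out": 2, "delay_budget": 1},
    {"id": "p3", "inp": 1, "out": 1, "delay_budget": 3},
    {"id": "p4", "inp": 2, "out": 2, "delay_budget": 1},
]

used_in, used_out, scheduled = set(), set(), []
for pkt in sorted(packets, key=lambda p: p["delay_budget"]):
    if pkt["inp"] not in used_in and pkt["out"] not in used_out:
        scheduled.append(pkt["id"])
        used_in.add(pkt["inp"])
        used_out.add(pkt["out"])
print(scheduled)   # ['p2', 'p3']
```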

12.
Aperiodic servers in a deadline scheduling environment
A real-time system may have tasks with soft deadlines, as well as hard deadlines. While earliest-deadline-first scheduling is effective for hard-deadline tasks, applying it to soft-deadline tasks may waste schedulable processor capacity or sacrifice average response time. Better average response time may be obtained, while still guaranteeing hard deadlines, with an aperiodic server. Three scheduling algorithms for aperiodic servers are described, and schedulability tests are derived for them. A simulation provides performance data for these three algorithms on random aperiodic tasks. The performance of the deadline aperiodic servers is compared with that of several alternatives, including background service, a deadline polling server, and rate-monotonic servers, and with estimates based on the M/M/1 queueing model. This adds to the evidence in support of deadline scheduling versus fixed-priority scheduling.
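For context, a minimal single-processor preemptive EDF simulator is sketched below (it is not one of the aperiodic servers described in the paper); it simply runs the ready job with the earliest deadline, preempting at each new arrival, and counts deadline misses. The job parameters are illustrative.

```python
import heapq

def edf_schedule(jobs):
    """Preemptive EDF on one processor; jobs are (release, deadline, work) tuples.
    Returns the number of jobs that miss their deadlines (illustrative only)."""
    jobs = sorted(jobs)                      # by release time
    ready, time, i, missed = [], 0.0, 0, 0
    while i < len(jobs) or ready:
        if not ready:                        # idle until the next release
            time = max(time, jobs[i][0])
        while i < len(jobs) and jobs[i][0] <= time:
            r, d, w = jobs[i]
            heapq.heappush(ready, (d, w))    # earliest deadline = highest priority
            i += 1
        d, w = heapq.heappop(ready)
        # Run until the job finishes or the next job arrives (preemption point).
        next_release = jobs[i][0] if i < len(jobs) else float("inf")
        run = min(w, next_release - time)
        time += run
        if w - run > 0:
            heapq.heappush(ready, (d, w - run))
        elif time > d:
            missed += 1
    return missed

print(edf_schedule([(0, 4, 2), (1, 3, 1), (2, 10, 5)]))   # 0 misses
```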

13.
While energy consumption is the primary concern for the design of real-time embedded systems, reliability and Quality of Service (QoS) are becoming increasingly important in the development of today's pervasive computing systems. In this paper, we present a reliability-aware energy management (RAEM) scheme for reducing the energy consumption for weakly hard real-time systems with (m, k)-constraints, which requires that at least m out of any k consecutive jobs of a task meet their deadlines. In order to ensure the (m, k)-constraints while preserving the system reliability, we propose to partition the real-time jobs into mandatory and optional ones as well as to reserve recovery space for mandatory ones in an adaptive way. Moreover, efficient on-line scheduling techniques are proposed to reduce the system-wide energy, which is consumed not only by the processor but also by other peripheral devices at run-time. Through extensive simulations, our experiment results demonstrate that the proposed techniques can significantly outperform the previous research in reducing system-wide energy consumption for weakly hard real-time systems while preserving the system reliability.

14.
Anil Singh, Nitin Auluck. Software, 2020, 50(11): 2012-2030
Fog networks have recently attracted the attention of researchers. The idea is that part of the computation of a job/application can be performed by fog devices located at the network edge, close to the users. Executing latency-sensitive applications on the cloud may not be feasible owing to the significant communication delay between the user and the cloud data center (cdc): by the time the application traverses the network and reaches the cloud data center, it might already be too late. Fog devices, also known as mobile data centers (mdcs), are, however, capable of executing such latency-sensitive applications. In this paper, we study the problem of balancing the application load across the mdcs in a fog network while taking into account the security constraints of jobs. If a particular mdc does not have sufficient capacity to execute a job, the job must be migrated to another mdc. To this end, we propose three heuristic algorithms: minimum distance, minimum load, and minimum hop distance and load (MHDL). We also propose an ILP-based algorithm called load balancing aware scheduling ILP (LASILP) for solving the task mapping and scheduling problem. The performance of the proposed algorithms has been compared with the cloud-only algorithm and another heuristic called fog-cloud-placement (FCP). Simulation results on real-life workload traces reveal that the MHDL heuristic outperforms the other scheduling policies in the fog computing environment while meeting application privacy requirements.

15.
We consider online preemptive scheduling problems where jobs have deadlines and the objective is to maximize the total weight of jobs completed before their deadlines. In the first problem, preemptions are not free but incur a penalty. In the second problem, a job has to be accepted or rejected immediately upon arrival, and may need to be immediately allocated a fixed scheduling interval as well; if these accepted jobs are not eventually completed, the job is lost, and a penalty is incurred. We give an algorithm with the optimal competitive ratio for the first problem, and new and improved algorithms for the second problem, under different models of preemptions and job weights.

16.
This paper presents a hybrid memetic algorithm for the problem of scheduling n jobs on m unrelated parallel machines with the objective of maximizing the weighted number of jobs completed exactly at their due dates. For each job, the due date, weight, and processing times on the different machines are given. It has been shown that when the number of machines is part of the input, this problem is NP-hard in the strong sense. First, the problem is formulated as an integer linear programming model, which is practical for solving small instances. A hybrid memetic algorithm is then implemented that uses two heuristic algorithms as constructive algorithms to build the initial population, together with a data-oriented mutation operator that aids the memetic algorithm's search process. The performance of all algorithms, including the heuristics (H1 and H2), the hybrid genetic algorithm, and the hybrid memetic algorithm, is evaluated through computational experiments, which demonstrate the capabilities of the proposed hybrid algorithm.

17.
This paper introduces an analytical method to approximate the fraction of jobs missing their deadlines in a soft real-time system when the earliest-deadline-first (EDF) scheduling policy is used. In the system, jobs either all have deadlines until the beginning of service (DBS) and are non-preemptive, or have deadlines until the end of service (DES) and are preemptive. In the former case, the system is represented by an M/M/m/EDF+G model, i.e., a multi-server queue with Poisson arrivals, exponential service, and generally distributed relative deadlines. In the latter case, it is represented by an M/M/1/EDF+G model, i.e., a single-server queue with the same specifications as before. EDF is known to be optimal in both of the above cases. The optimality of the EDF scheduling policy is used to estimate a key parameter, namely the loss rate when there are n jobs in the system. The estimate is obtained by assuming an upper bound and a lower bound for this parameter and then linearly combining the two bounds. The resulting Markov chains can then be easily solved numerically. Comparing numerical and simulation results, we find that the approximation errors are relatively small.

18.
Time-constrained service plays an important role in ubiquitous services. However, the resource constraints of ubiquitous computing systems make it difficult to satisfy the timing requirements of the strategies they support. In this study, we examine scheduling strategies for mobile data programs whose timing constraints take the form of deadlines. Unlike previously proposed scheduling algorithms for mobile systems, which aim to minimize the mean access time, our goal is to identify scheduling algorithms for ubiquitous systems that ensure requests meet their deadlines. We study the performance of traditional real-time strategies and demonstrate that they do not always perform best in a mobile environment. We then propose an efficient scheduling algorithm, called scheduling priority of mobile data with time constraints (SPMT), designed for the timely delivery of data to mobile clients. The experimental results show that our approach outperforms the alternatives on the chosen performance criteria.

19.
To provide timely results for big data analytics, it is crucial to satisfy deadline requirements for MapReduce jobs in today's production environments. Much effort has been devoted to the problem of meeting deadlines, and typically there are two kinds of solutions. The first is to allocate appropriate resources to complete the entire job before the specified time limit, where deadlines may still be missed because of tight constraints or a lack of resources; the second is to run a pre-constructed sample chosen to fit the deadline, which satisfies the time requirement but fails to maximize the volume of processed data. In this paper, we propose a deadline-oriented task scheduling approach, named 'Dart', to address this problem. Given a specified deadline and restricted resources, Dart uses an iterative estimation method, based on both historical data and the job's running status, to precisely estimate the real-time job completion time. Based on the estimated time, Dart uses an approach–revise algorithm to make dynamic scheduling decisions that meet the deadline while maximizing the amount of processed data and mitigating stragglers. Dart also efficiently handles task failures and data skew, protecting its performance from being harmed. We have validated our approach using workloads from OpenCloud and Facebook on a cluster of 64 virtual machines. The results show that Dart can not only effectively meet the deadline but also process near-maximum volumes of data even with tight deadlines and limited resources.
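A simple progress-based completion-time estimate in the spirit of the deadline check described above; Dart's actual estimator also folds in historical job data, so the formula, parameters, and threshold test below are assumptions for illustration.

```python
# A running task that is p fraction done after t seconds is predicted to need
# t * (1 - p) / p more seconds (naive linear extrapolation).
def remaining_time(elapsed: float, progress: float) -> float:
    return elapsed * (1.0 - progress) / progress

def job_will_meet_deadline(now, deadline, running, pending, slots, avg_task_time):
    """running: list of (elapsed, progress) pairs; pending: number of tasks not
    yet started; slots: task slots usable concurrently."""
    longest_running = max((remaining_time(e, p) for e, p in running), default=0.0)
    # Pending tasks are processed `slots` at a time in waves of avg_task_time.
    waves = -(-pending // slots)                    # ceiling division
    estimate = now + longest_running + waves * avg_task_time
    return estimate <= deadline, estimate

ok, eta = job_will_meet_deadline(now=100.0, deadline=400.0,
                                 running=[(30.0, 0.5), (60.0, 0.8)],
                                 pending=20, slots=8, avg_task_time=45.0)
print(ok, eta)   # True 265.0
```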

20.
The energy efficiency of cloud data centers has received significant attention recently, as data centers often consume substantial resources in operation. Most existing energy-saving algorithms focus on resource consolidation. This paper proposes a simulation-driven methodology with an accurate energy model to verify performance, and introduces a new resource scheduling algorithm, Best-Fit-Decreasing-Power (BFDP), to improve energy efficiency without degrading the system's QoS. Both the model and the scheduling algorithm have been extensively simulated and validated, and the results show that they are effective; the proposed model and algorithm outperform existing resource scheduling algorithms, especially under light workloads.
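A plausible reading of Best-Fit-Decreasing-Power as a bin-packing heuristic: place VMs in decreasing order of demand on the host whose power draw increases the least under a linear power-versus-utilization model. The capacity, power figures, host count, and tie-breaking below are assumptions, since the abstract does not spell out the algorithm's details.

```python
HOST_CAPACITY = 100            # CPU units per host (illustrative)
P_IDLE, P_PEAK = 100.0, 250.0  # watts at 0% and 100% utilization (illustrative)

def power(used):
    """Linear power model; a host with no load is assumed switched off."""
    return 0.0 if used == 0 else P_IDLE + (P_PEAK - P_IDLE) * used / HOST_CAPACITY

def bfdp(vm_demands, hosts=4):
    used = [0] * hosts
    placement = {}
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        # Assumes a feasible host always exists for every VM.
        candidates = [h for h in range(hosts) if used[h] + demand <= HOST_CAPACITY]
        # "Best fit by power": the host whose marginal power increase is smallest,
        # which steers VMs onto already-active hosts before waking idle ones.
        best = min(candidates, key=lambda h: power(used[h] + demand) - power(used[h]))
        used[best] += demand
        placement[vm] = best
    return placement, sum(power(u) for u in used)

print(bfdp({"vm1": 60, "vm2": 50, "vm3": 30, "vm4": 20}))
```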
