Similar Documents
1.
Data-driven programming models such as many-task computing (MTC) have been prevalent for running data-intensive scientific applications. MTC applies over-decomposition to enable distributed scheduling. To achieve extreme scalability, MTC proposes a fully distributed task scheduling architecture that employs as many schedulers as compute nodes to make scheduling decisions. Achieving distributed load balancing and best exploiting data locality are two important goals for the best performance of distributed scheduling of data-intensive applications. Our previous research proposed a data-aware work-stealing technique to optimize both load balancing and data locality by using both dedicated and shared task ready queues in each scheduler. Tasks were organized in queues based on the input data size and location. A distributed key-value store was applied to manage task metadata. We implemented the technique in MATRIX, a distributed MTC task execution framework. In this work, we devise an analytical suboptimal upper bound of the proposed technique, compare MATRIX with other scheduling systems, and explore the scalability of the technique at extreme scales. Results show that the technique is not only scalable but can achieve performance within 15% of the suboptimal solution. Copyright © 2015 John Wiley & Sons, Ltd.
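The abstract only outlines the dedicated/shared-queue idea; below is a minimal sketch of a data-aware work-stealing scheduler under assumed names and thresholds (Scheduler, local_threshold, a two-probe steal), not the actual MATRIX implementation.

```python
import random
from collections import deque

class Scheduler:
    """One scheduler per compute node (illustrative sketch, not the MATRIX API)."""
    def __init__(self, node_id, local_threshold=1 << 20):  # assumed 1 MB locality cutoff
        self.node_id = node_id
        self.dedicated = deque()   # tasks whose input data lives on this node
        self.shared = deque()      # small or location-neutral tasks, open to stealing
        self.local_threshold = local_threshold

    def submit(self, task):
        # Place data-heavy tasks in the dedicated queue of the node holding their input.
        if task["input_size"] >= self.local_threshold and task["data_node"] == self.node_id:
            self.dedicated.append(task)
        else:
            self.shared.append(task)

    def next_task(self, peers):
        # Prefer local work; otherwise probe a couple of peers and steal shared tasks.
        if self.dedicated:
            return self.dedicated.popleft()
        if self.shared:
            return self.shared.popleft()
        for peer in random.sample(peers, k=min(2, len(peers))):  # assumed 2 steal probes
            if peer.shared:
                return peer.shared.popleft()
        return None
```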

2.
Task scheduling is a fundamental issue in achieving high efficiency in cloud computing. However, efficient scheduling algorithm design and implementation is a big challenge, as the general scheduling problem is NP-complete. Most existing task-scheduling methods for cloud computing only consider task resource requirements for CPU and memory, without considering bandwidth requirements. In order to obtain better performance, in this paper, we propose a bandwidth-aware algorithm for divisible task scheduling in cloud-computing environments. A nonlinear programming model for the divisible task-scheduling problem under the bounded multi-port model is presented. By solving this model, the optimized allocation scheme that determines the proper number of tasks assigned to each virtual resource node is obtained. On the basis of the optimized allocation scheme, a heuristic algorithm for divisible load scheduling, called the bandwidth-aware task-scheduling (BATS) algorithm, is proposed. The performance of the algorithm is evaluated using the CloudSim toolkit. Experimental results show that, compared with the fair-based task-scheduling algorithm, the bandwidth-only task-scheduling algorithm, and the computation-only task-scheduling algorithm, the proposed algorithm (BATS) has better performance. Copyright © 2012 John Wiley & Sons, Ltd.
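BATS derives its allocation from a nonlinear program; as a rough illustration of the underlying idea only, the sketch below splits a divisible load in proportion to each node's bottleneck rate, with 'cpu_rate' and 'bandwidth' as assumed inputs rather than the paper's model.

```python
def allocate_divisible_load(total_tasks, nodes):
    """Split a divisible load so each node's share matches its bottleneck rate.

    nodes: list of dicts with assumed keys 'cpu_rate' (tasks/s the VM can process)
    and 'bandwidth' (tasks/s the link can deliver). This is a proportional
    heuristic standing in for the paper's nonlinear-programming solution.
    """
    effective = [min(n["cpu_rate"], n["bandwidth"]) for n in nodes]
    total_rate = sum(effective)
    shares = [int(total_tasks * r / total_rate) for r in effective]
    # Hand out any rounding remainder to the fastest nodes first.
    remainder = total_tasks - sum(shares)
    for i in sorted(range(len(nodes)), key=lambda i: -effective[i])[:remainder]:
        shares[i] += 1
    return shares

# Example: 100 tasks over three VMs with different CPU and link rates.
print(allocate_divisible_load(100, [
    {"cpu_rate": 8.0, "bandwidth": 4.0},
    {"cpu_rate": 3.0, "bandwidth": 6.0},
    {"cpu_rate": 5.0, "bandwidth": 5.0},
]))  # -> [33, 25, 42]
```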

3.
The complexity of computing systems introduces issues and challenges such as poor performance and high energy consumption. In this paper, we first define and model a resource contention metric for high-performance computing workloads as a performance metric in scheduling algorithms and systems at the highest level of the resource management stack, addressing the main issues in computing systems. Second, we propose a novel autonomic resource contention-aware scheduling approach architected on various layers of the resource management stack. We establish the relationship between distributed resource management layers in order to optimize the resource contention metric. The simulation results confirm the effectiveness of our approach. Copyright © 2013 John Wiley & Sons, Ltd.

4.
The MapReduce programming paradigm has been widely applied to solve large-scale data-intensive problems. Intensive studies of MapReduce scheduling have been carried out to improve MapReduce system performance. Delay scheduling is a common way to achieve high data locality and system performance. However, inappropriate delays can lead to low system throughput and potentially break the original job priority constraints. This paper proposes a deadline-enabled delay (DLD) scheduling algorithm that optimizes job delay decisions according to real-time resource availability and resource competition, while still meeting job deadline constraints. Experimental results illustrate that the resource availability estimation method of DLD is accurate (92%). Compared with other approaches, DLD reduces job turnaround time by 22% on average while keeping a high locality rate (88%). Copyright © 2013 John Wiley & Sons, Ltd.
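The abstract does not spell out the delay rule; below is a minimal sketch of a deadline-aware delay decision, assuming per-job 'deadline' and 'estimated_runtime' fields and an externally predicted wait for a data-local slot. The names and the rule are illustrative, not DLD's actual interface.

```python
def should_delay(job, now, predicted_local_slot_wait):
    """Decide whether to keep waiting for a data-local slot (illustrative sketch).

    The real DLD algorithm optimizes delays from real-time resource availability
    and competition; here the rule is simply: delay only while the predicted wait
    for a local slot still fits inside the job's deadline slack.
    """
    slack = job["deadline"] - now - job["estimated_runtime"]
    return predicted_local_slot_wait <= max(0.0, slack)

# A job due in 300 s that needs 200 s may wait up to 100 s for locality.
job = {"deadline": 300.0, "estimated_runtime": 200.0}
print(should_delay(job, now=0.0, predicted_local_slot_wait=60.0))   # True
print(should_delay(job, now=0.0, predicted_local_slot_wait=120.0))  # False
```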

5.
With the rapid development of cloud computing, many distributed data centers have been deployed, which brings larger energy consumption requirements to the data center. How to reduce the cost of the data center has received significant attention recently. Although there are several efforts in studying the energy consumption of data centers, very few have considered modeling and analyzing cost-aware job scheduling for the cloud data center. To address this emerging problem, we propose a systematic approach that considers both the basic elements and their relationships in a cloud data center. First, we present a formal language to describe the cloud data center, and a job scheduling net is proposed to formally model the basic elements such as user request, Web portal, data center, and server. Second, we minimize the total cost of the cloud data center by considering multidimensional resources and the local electricity price on the basis of the state space of the constructed model. The dynamic job scheduling algorithm and its specific execution steps are proposed based on the alternating direction method of multipliers algorithm. Third, the operational semantics and related theories of Petri nets for establishing the correctness of our proposed method are presented. Finally, a series of simulations are performed to illustrate that the proposed method can guarantee the correct behavior of job scheduling in the cloud data center while meeting the required cost.

6.
Data analysis plays a major role in research applications that require a large volume of data. Cloud computing can provide computer processing resources and device-to-device data sharing based on user requirements. The main goal of cloud computing is to allow users and enterprises of varying capabilities to store and process data in an efficient way and to access and distribute resources. However, a crucial problem in cloud computing is job scheduling for numerous users. Prior to the implementation of job scheduling, jobs must be categorized according to their degree of criticality, privacy, and time required. Based on the experimental results, the combination of tasks was successfully determined by the processor. In heterogeneous multiprocessor systems, customized job scheduling is highly critical for obtaining optimal job performance. In this paper, an evolutionary genetic algorithm was used to obtain better results in job scheduling, thereby improving performance of the cloud system. The genetic algorithm-based job scheduling process introduced here minimizes time investment through effective allocation of user requests in order to enhance the overall efficiency of the system.
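The abstract leaves the chromosome encoding and operators unspecified; below is a toy genetic algorithm for job-to-machine scheduling with assumed population size, crossover, and mutation settings, offered only to make the approach concrete.

```python
import random

def makespan(assignment, job_len, speed):
    """Finish time of the most loaded machine for a job-to-machine assignment."""
    load = [0.0] * len(speed)
    for job, m in enumerate(assignment):
        load[m] += job_len[job] / speed[m]
    return max(load)

def ga_schedule(job_len, speed, pop=30, gens=200, mut=0.1):
    """Toy genetic algorithm for job scheduling (assumed parameters, not the paper's)."""
    n, m = len(job_len), len(speed)
    population = [[random.randrange(m) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda ind: makespan(ind, job_len, speed))
        elite = population[: pop // 2]                     # selection: keep the best half
        children = []
        while len(elite) + len(children) < pop:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, n)                   # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n):                             # per-gene mutation
                if random.random() < mut:
                    child[i] = random.randrange(m)
            children.append(child)
        population = elite + children
    return min(population, key=lambda ind: makespan(ind, job_len, speed))

best = ga_schedule(job_len=[4, 8, 2, 6, 5, 3], speed=[1.0, 2.0])
print(best, makespan(best, [4, 8, 2, 6, 5, 3], [1.0, 2.0]))
```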

7.
Application software execution requests, from mobile devices to cloud service providers, are often heterogeneous in terms of device, network, and application runtime contexts. These heterogeneous contexts include the remaining battery level of a mobile device, the network signal strength it receives, and the quality-of-service (QoS) requirement of an application submitted from that device. Scheduling such application execution requests (from many mobile devices) on competent virtual machines to enhance user quality of experience (QoE) is a multi-constrained optimization problem. However, existing solutions in the literature either address the utility maximization problem for service providers or optimize the application QoS levels, bypassing device-level and network-level contextual information. In this paper, a multi-objective nonlinear programming solution to the context-aware application software scheduling problem has been developed, namely, the QoE and context-aware scheduling (QCASH) method, which minimizes application execution times (i.e., maximizes the QoE) and maximizes the application execution success rate. To the best of our knowledge, QCASH is the first work in this domain that addresses the optimal scheduling problem for mobile application execution requests with three-dimensional context parameters. In QCASH, the context priority of each application is measured by applying min-max normalization and multiple linear regression models on three context parameters: battery level, network signal strength, and application QoS. Experimental results from simulation runs on the CloudSim toolkit demonstrate that QCASH outperforms state-of-the-art works in success rate, waiting time, and QoE. Copyright © 2016 John Wiley & Sons, Ltd.
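A minimal sketch of the context-priority idea: min-max normalization of the three context parameters followed by a weighted combination. The value ranges and weights are assumptions for illustration; QCASH fits its coefficients with multiple linear regression rather than using fixed weights.

```python
def min_max(value, lo, hi):
    """Scale a raw context reading into [0, 1]; degenerate ranges map to 0."""
    return 0.0 if hi == lo else (value - lo) / (hi - lo)

def context_priority(battery_pct, signal_dbm, qos_level, weights=(0.4, 0.3, 0.3)):
    """Combine three normalized context parameters into one priority score.

    QCASH fits the weights with multiple linear regression; the weights, ranges,
    and the simple weighted sum below are illustrative assumptions only.
    """
    b = min_max(battery_pct, 0, 100)        # remaining battery level
    s = min_max(signal_dbm, -110, -50)      # assumed RSSI range in dBm
    q = min_max(qos_level, 1, 5)            # assumed 1..5 application QoS class
    wb, ws, wq = weights
    # Low battery and weak signal should raise urgency, so invert those terms.
    return wb * (1 - b) + ws * (1 - s) + wq * q

print(round(context_priority(battery_pct=15, signal_dbm=-95, qos_level=4), 3))  # 0.79
```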

8.
In heterogeneous Hadoop clusters, the mixed use of erasure-coded and replica storage modes, together with real-time differences in the computing power of server nodes, degrades MapReduce job processing efficiency. To mitigate this problem, this paper implements a scheduling strategy that dynamically adjusts MapReduce task assignment under multi-job concurrency according to data placement and real-time node load. The strategy modifies the data placement policy of the current Hadoop framework and dynamically controls the task concurrency of each node, achieving more balanced resource allocation among concurrently running jobs. Experimental results show that, compared with Hadoop's two default job scheduling policies, the proposed scheduling mode shortens job completion time by about 17% and effectively avoids the starvation faced by some jobs.

9.
To address the low efficiency and poor utilization of traditional cloud task-scheduling algorithms, an algorithm that fuses an improved fruit fly optimization algorithm (IFOA) with a genetic algorithm (GA) is applied to task scheduling. First, the scheduling problem is converted into a DAG (directed acyclic graph) and the task scheduling order is simplified using Kruskal's algorithm. Second, the fruit-fly population is initialized with orthogonal arrays and quantization techniques, boundary handling is applied to the fruit fly algorithm, the exploration step size is adjusted dynamically, and the GA is used for individual selection. Finally, the fused IFOA-GA algorithm is applied to cloud task scheduling on a simulation platform; compared with the IGA, IFOA, and IPSO algorithms, it shows certain advantages across four QoS metrics, indicating that IFOA-GA can effectively improve cloud scheduling efficiency.

10.
Grid applications with stringent security requirements introduce challenging concerns because the schedule devised by non-security-aware scheduling algorithms may suffer when scheduling tasks with security constraints. To make scheduling security aware, estimation and quantification of the security overhead is necessary. The proposed model quantifies security, in the form of security levels, on the basis of the cipher suite negotiated between the task and the grid node, and incorporates it into the existing heuristics MinMin and MaxMin to make them security aware: MinMin(SA) and MaxMin(SA). It also proposes SPMaxMin (Security Prioritized MaxMin) and compares it with three heuristics, MinMin(SA), MaxMin(SA), and SPMinMin, in a heterogeneous grid/task environment. Extensive computer simulation results reveal that the performance of the various heuristics varies with the variation in computational and security heterogeneity. Analysis over nine heterogeneous grid/task workload situations indicates that an algorithm that performs better for one workload degrades in another; it is conspicuous that for a particular workload one algorithm gives better makespan while another gives better response time. Finally, a security-aware scheduling model is proposed, which adapts itself to the dynamic nature of the grid and picks the best suited algorithm among the four analyzed heuristics on the basis of job characteristics, grid characteristics, and the desired performance metric. Copyright © 2011 John Wiley & Sons, Ltd.
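A minimal sketch of the MinMin(SA) idea: the standard MinMin loop with a security-level feasibility check and an added security overhead term. The 5% per-level overhead factor and the task/node field names are assumptions, not the paper's quantification.

```python
def security_aware_minmin(tasks, nodes):
    """MinMin with a security overhead term (sketch of the MinMin(SA) idea).

    tasks: dicts with assumed keys 'name', 'length', and 'sec_demand' (required level).
    nodes: dicts with assumed keys 'name', 'speed', 'sec_level', and 'ready' time.
    """
    schedule = []
    remaining = list(tasks)
    while remaining:
        best = None
        for t in remaining:
            for n in nodes:
                if n["sec_level"] < t["sec_demand"]:
                    continue                              # node cannot satisfy the cipher suite
                overhead = 1.0 + 0.05 * t["sec_demand"]   # assumed 5% cost per security level
                finish = n["ready"] + overhead * t["length"] / n["speed"]
                if best is None or finish < best[0]:
                    best = (finish, t, n)
        if best is None:
            raise ValueError("some task has no node meeting its security demand")
        finish, t, n = best
        n["ready"] = finish                               # commit the minimum-completion-time pair
        remaining.remove(t)
        schedule.append((t["name"], n["name"], round(finish, 2)))
    return schedule

tasks = [{"name": "t1", "length": 10, "sec_demand": 2},
         {"name": "t2", "length": 4, "sec_demand": 4}]
nodes = [{"name": "n1", "speed": 2.0, "sec_level": 5, "ready": 0.0},
         {"name": "n2", "speed": 1.0, "sec_level": 3, "ready": 0.0}]
print(security_aware_minmin(tasks, nodes))
```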

11.
郑莉华  曾雪 《计算机应用研究》2013,30(10):3139-3141
A MapReduce-based parallel video encoding architecture is proposed: the source video is split and distributed as tasks to different processors, which encode in parallel in order to increase encoding speed. To minimize the completion time of the whole task and balance the load, the system considers both the characteristics of video encoding and the processing capability of the processors, and the LBMM (load balance maximal-minimal complete time) algorithm is given. Simulation results show that the proposed parallel video encoding architecture greatly improves the encoding efficiency of large video sequences and reduces the average job response time. LBMM encodes video faster than the Min-Min algorithm and the existing round-robin scheduling algorithm in CloudSim.

12.
Reducing power consumption has been an essential requirement for cloud resource providers, not only to decrease operating costs but also to improve system reliability. As cloud computing becomes the enabler of the Anything as a Service (XaaS) paradigm, modern real-time services also become available through cloud computing. In this work, we investigate power-aware provisioning of virtual machines for real-time services. Our approach is (i) to model a real-time service as a real-time virtual machine request; and (ii) to provision virtual machines in cloud data centers using dynamic voltage and frequency scaling (DVFS) schemes. We propose several schemes to reduce the power consumed by hard real-time services and for power-aware profitable provisioning of soft real-time services. Copyright © 2011 John Wiley & Sons, Ltd.
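As a toy illustration of DVFS-based provisioning (not the paper's schemes), the sketch below picks the lowest frequency step that still lets a hard real-time request finish by its deadline; the frequency list and cycle counts are assumed values.

```python
def pick_frequency(cycles, deadline_s, freqs_ghz):
    """Choose the lowest DVFS frequency that still finishes a hard real-time
    request by its deadline (illustrative sketch of the provisioning idea).

    cycles: work of the VM request in giga-cycles; freqs_ghz: the host's
    available frequency steps (assumed values; dynamic power grows roughly as f^3,
    so the slowest feasible step is the least power-hungry choice).
    """
    for f in sorted(freqs_ghz):                 # slowest first = least power
        if cycles / f <= deadline_s:
            return f
    return None                                 # infeasible even at full speed

freqs = [1.2, 1.6, 2.0, 2.4]                    # GHz steps of a hypothetical host
print(pick_frequency(cycles=3.0, deadline_s=2.0, freqs_ghz=freqs))   # -> 1.6
print(pick_frequency(cycles=6.0, deadline_s=2.0, freqs_ghz=freqs))   # -> None
```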

13.
It is well known that data centers consume a large amount of energy, which incurs significant financial and environmental costs. Recently, there has been increasing interest in utilizing green energy, such as solar and wind, for data centers. This paper studies the crucial problem of maximizing the utilization of green energy through scheduling complex jobs in data centers in order to reduce the use of traditional brown energy. However, it is highly challenging for data centers to make use of green energy. First, the availability of typical green energy varies with dynamic changes in the natural environment, for example, the weather. Second, although predictions can be made for the future availability of green energy, such predictions inevitably have errors. Third, jobs are associated with strict deadlines, and it is required that jobs are completed before their deadlines. Finally, because reliability in a data center depends on temperature, temperature awareness should be taken into account while maximizing the use of green energy. In this paper, we consider online scheduling of jobs that arrive at the data center dynamically. In addition, we explicitly take the power consumption of switches into account when scheduling jobs onto computing nodes. Two solar energy-aware algorithms, called SEEDMin and SEEDMax, are proposed. Then, we extend SEED to RSEED with the awareness of reliability. To evaluate the effectiveness of the proposed algorithms, comprehensive simulations have been conducted, and the proposed algorithms are compared with other state-of-the-art algorithms. Experimental results demonstrate that both SEEDMin and SEEDMax can significantly increase the utilization of solar energy without violating job deadlines or the overall energy budget. The amount of solar energy utilized by SEEDMin and SEEDMax is 33.4% and 35.3% larger than that of two traditional scheduling algorithms, MinMin and MinMax, respectively. Also, it can be seen that RSEED greatly improves reliability by decreasing the temperature. Copyright © 2013 John Wiley & Sons, Ltd.
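The SEED algorithms themselves are not given in the abstract; the sketch below is a stand-in greedy placement that puts deadline-ordered jobs into the forecast slots with the most leftover solar power. The forecast, slot granularity, and job fields are illustrative assumptions, not the paper's method.

```python
def solar_aware_schedule(jobs, solar_forecast):
    """Greedy placement of jobs into the sunniest slots that meet their deadlines.

    A stand-in for the SEED idea, not the paper's algorithm: 'solar_forecast' is
    predicted solar power per slot, jobs are dicts with assumed keys 'name',
    'power', 'slots' (duration in slots), and 'deadline_slot'. Each job is assumed
    to fit before its deadline.
    """
    residual = list(solar_forecast)            # solar power still uncommitted per slot
    plan = {}
    for job in sorted(jobs, key=lambda j: j["deadline_slot"]):   # earliest deadline first
        latest_start = job["deadline_slot"] - job["slots"]
        # Pick the start slot whose window has the most leftover solar energy.
        start = max(range(latest_start + 1),
                    key=lambda s: sum(residual[s:s + job["slots"]]))
        for s in range(start, start + job["slots"]):
            residual[s] -= job["power"]        # may go negative: brown energy tops it up
        plan[job["name"]] = start
    return plan

forecast = [0.2, 0.8, 1.5, 1.9, 1.6, 0.9]      # kW per hour-long slot (illustrative)
jobs = [{"name": "j1", "power": 1.0, "slots": 2, "deadline_slot": 6},
        {"name": "j2", "power": 0.5, "slots": 1, "deadline_slot": 3}]
print(solar_aware_schedule(jobs, forecast))    # {'j2': 2, 'j1': 3}
```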

14.
Utilization of cloud computing resources has grown fast in e-business. Business and government agencies often need to handle a large volume of service requests, the so-called instance-intensive business processes, within a constrained period. On-time completion of instance-intensive business processes within the constrained time is a very important issue. In the past few years, traditional optimal task scheduling has been well researched and proven to be NP-complete, and many heuristic and metaheuristic algorithms have been put forward to solve the issue with near-optimal solutions. However, most of them treat a single workflow instance as a multistep task without considering that the steps within a task can be different types of activities. To capture the multistep features of business workflows, a typical motivating instance-intensive business example from securities exchange and a multistep scheduling model for business workflows are introduced in this paper. Then our near-optimal dynamic priority scheduling (DPS) strategy is proposed on the basis of the Min-Min heuristic and a greedy philosophy. Compared with first come first served and constrained Min-Min, in terms of makespan and standard deviation, DPS makes a more optimized choice in each round of scheduling towards the overall outcome. To show the effectiveness of DPS, the theoretical minimum execution time (MET_theory) is used as a benchmark for evaluation based on simulation. The results show that the ratios between MET_theory and DPS are more than 98.5% when scheduling task counts across different orders of magnitude, from 1000 to 1,000,000. In particular, the ratio between MET_theory and DPS is nearly 99.9% with 1,000,000 tasks, which means that our DPS can obtain near-optimal results when scheduling a large number of tasks.

15.
While cloud manufacturing technology brings opportunities to manufacturing enterprises, it also poses new challenges for the design and implementation of their manufacturing execution systems (MES). To solve the job planning and scheduling optimization problem in single-piece, small-batch MES, a closed-loop architecture is first designed, running from static formulation of job plans, to real-time monitoring and active sensing of job execution, to intelligent response to abnormal events, and finally to dynamic adjustment of job scheduling. Next, three sub-problems are analyzed in turn and technical solutions are given: real-time acquisition of abnormal information and detection of abnormal events, intelligent handling of abnormal events, and delivering the computing capability of the job planning and scheduling optimization algorithms as services. Finally, taking the Harbin Electric Machinery Plant (哈尔滨电机厂) as a case, an integrated job planning and scheduling optimization system for single-piece, small-batch MES was implemented using the IEC/ISO 62264 standard, big-data analysis and mining methods, and cloud computing technologies comprising virtualization, servitization, and SOA, verifying the effectiveness of the above theory and methods.

16.
Mobile edge cloud computing is a promising computing paradigm in which mobile users can offload their application workloads to low-latency local edge cloud resources. However, compared with remote public cloud resources, conventional local edge cloud resources are limited in computation capacity, especially when serving a large number of mobile applications. To deal with this problem, we present a hierarchical edge cloud architecture that integrates local edge clouds and public clouds so as to improve the performance and scalability of scheduling for mobile applications. Besides, to achieve a trade-off between cost and system delay, a fault-tolerant dynamic resource scheduling method is proposed to address the scheduling problem in mobile edge cloud computing. The optimization problem is formulated to minimize the application cost while satisfying the user-defined deadline. Specifically, first, a game-theoretic scheduling mechanism is adopted for resource provisioning and scheduling of multiprovider mobile applications. Then, a mobility-aware dynamic scheduling strategy is presented to update the schedule with consideration of the mobility of mobile users. Moreover, a failure recovery mechanism is proposed to deal with uncertainties during the execution of mobile applications. Finally, experiments are designed and conducted to validate the effectiveness of our proposal. The experimental results show that our method can achieve a trade-off between cost and system delay.

17.
The MapReduce programming model is widely used on big data processing platforms, and an effective task scheduling algorithm is crucial to the model's efficiency. The Map and Reduce phases of a MapReduce workflow are decomposed into several jobs with precedence constraints, and each job is further decomposed into multiple tasks. Based on the available resources of the compute cluster and task heterogeneity, a two-level directed acyclic graph (DAG) model oriented to jobs and tasks is constructed, and a heterogeneous scheduling algorithm based on two-level priority ranking, 2-MRHS, is proposed. In the first phase, priority ranking is performed: priority values are computed for jobs and for tasks separately and then combined to obtain the task scheduling queue. In the second phase, task assignment is performed: the data-block subtasks of each task are assigned to the most suitable compute node based on the earliest finish time. Experiments with a large number of randomly generated DAG models show that, compared with other related algorithms, the proposed algorithm achieves a shorter and more stable scheduling length (makespan).
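As a rough illustration of the two-level idea (assumed priority formula and field names, not the 2-MRHS definition), the sketch below ranks tasks by a combined job-and-task priority and then assigns each to the node with the earliest finish time.

```python
def two_level_schedule(jobs, node_speed):
    """Sketch of a two-level priority list scheduler in the spirit of 2-MRHS.

    jobs: list of dicts with assumed keys 'priority' (job-level weight) and
    'tasks' (list of task lengths). Real 2-MRHS derives priorities from a
    two-level DAG; here priority is simply job weight plus task length.
    """
    # Level 1: rank every task by combined job and task priority.
    queue = []
    for j, job in enumerate(jobs):
        for t, length in enumerate(job["tasks"]):
            queue.append((job["priority"] + length, length, f"job{j}/task{t}"))
    queue.sort(reverse=True)                       # higher priority value first

    # Level 2: assign each task to the node with the earliest finish time.
    ready = [0.0] * len(node_speed)
    placement = []
    for _, length, name in queue:
        finish = [ready[n] + length / node_speed[n] for n in range(len(node_speed))]
        n = min(range(len(node_speed)), key=lambda i: finish[i])
        ready[n] = finish[n]
        placement.append((name, n, round(ready[n], 2)))
    return placement

print(two_level_schedule(
    jobs=[{"priority": 10, "tasks": [4, 2]}, {"priority": 5, "tasks": [6]}],
    node_speed=[1.0, 2.0]))
```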

18.
Based on the characteristics of cloud storage and computing architectures, resource storage algorithms and task allocation are studied. Cloud resource management usually considers only the time and space complexity of an algorithm while ignoring the time consumed at the data link layer due to scheduling; therefore, network storage awareness is combined with a greedy algorithm and an improved greedy algorithm is proposed, aiming to greatly reduce the time data spends at the data link layer. Finally, simulations of the cloud environment were carried out on the CloudSim platform and the results were compared with those of an ordinary greedy algorithm. The comparative analysis shows that the improved greedy algorithm executes tasks in less time and with higher efficiency.

19.
By the definition of big data, oil and gas exploration data processing is clearly a big data application pattern. The exploration cloud computing center, which serves as the core working platform for oil and gas exploration, is built first of all to meet the data processing needs of the enterprise's internal business. On the basis of ensuring shared computing power and collaborative work, the exploration cloud places more emphasis on data storage, data management, and support for data application business; compared with the general public-cloud construction model, the construction of an exploration private cloud follows its own distinctive pattern. Combining its own characteristics with current big data application requirements, the author's enterprise carried out exploration private-cloud construction, achieved certain results, and provided clear support for the enterprise's main business.

20.
MapReduce is a framework for distributed processing of large-scale data and is now widely used in many fields. In clusters that provide MapReduce services, guaranteeing the deadline constraints of users with different priorities is a challenging MapReduce job scheduling problem. To address it, a multi-priority job scheduling algorithm based on queueing networks (MPSA) is proposed. First, algorithms based on the MapReduce model are analyzed and summarized, three common patterns are identified, and a mathematical model of MapReduce-based algorithms is built with a Jackson queueing network; from this model the resource demand of each priority queue can be derived. Next, an AR(1) model is used for prediction so that the algorithm can adapt dynamically to changing user arrival rates. A binary search is then used to compute, step by step, the number of slots allocated to each priority level in the map and reduce phases. Finally, a real-time scheduling algorithm applied in the MapReduce model is implemented. Experimental results show that, compared with the traditional FIFO and fair scheduling algorithms, the proposed algorithm can more effectively satisfy the deadline constraints of users with different priorities as the user arrival rate and task size vary.
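A minimal sketch of two pieces of that pipeline: the AR(1) one-step forecast and a binary search for the smallest slot count meeting a deadline. The delay model used here is a crude M/M/1-style approximation rather than the paper's Jackson-network analysis, and the coefficient and rates are assumed values.

```python
def ar1_forecast(history, phi=0.8):
    """One-step AR(1) forecast of the arrival rate (phi is an assumed coefficient)."""
    mean = sum(history) / len(history)
    return mean + phi * (history[-1] - mean)

def min_slots_for_deadline(arrival_rate, service_rate, deadline, max_slots=1024):
    """Binary-search the smallest slot count whose queueing delay meets a deadline.

    Sketch in the spirit of MPSA: the delay model below is a rough M/M/1-style
    approximation W = 1 / (c*mu - lambda), not the paper's Jackson-network result.
    """
    def response_time(c):
        capacity = c * service_rate
        return float("inf") if capacity <= arrival_rate else 1.0 / (capacity - arrival_rate)

    lo, hi = 1, max_slots
    if response_time(hi) > deadline:
        return None                               # infeasible within the slot budget
    while lo < hi:                                # response_time is monotone in c
        mid = (lo + hi) // 2
        if response_time(mid) <= deadline:
            hi = mid
        else:
            lo = mid + 1
    return lo

lam = ar1_forecast([8.0, 9.0, 12.0, 11.0])        # predicted jobs per second
print(round(lam, 2), min_slots_for_deadline(lam, service_rate=2.0, deadline=0.5))  # 10.8 7
```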
