Similar Documents
20 similar documents retrieved (search time: 15 ms)
1.
We study the problem of one-dimensional partitioning of nonuniform workload arrays, with optimal load balancing for heterogeneous systems. We look at two cases: chain-on-chain partitioning, where the order of the processors is specified, and chain partitioning, where processor permutation is allowed. We present polynomial time algorithms to solve the chain-on-chain partitioning problem optimally, while we prove that the chain partitioning problem is NP-complete. Our empirical studies show that our proposed exact algorithms produce substantially better results than heuristics, while solution times remain comparable.
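The chain-on-chain case lends itself to a probe-based formulation. The sketch below is only an illustration of that idea (not the paper's exact algorithm): a feasibility probe checks whether a bottleneck time B can be met by greedily giving each processor, in the fixed order, the longest prefix it can finish within B, and a bisection narrows B down. The workload values, speeds, and iteration count are illustrative assumptions.

```python
# Minimal probe-and-bisection sketch of heterogeneous chain-on-chain
# partitioning; an illustration, not the paper's exact polynomial algorithm.
def probe(work, speeds, B):
    """True if the fixed-order chain can be split so no processor needs more than time B."""
    p, load = 0, 0.0
    for w in work:
        while p < len(speeds) and load + w > B * speeds[p]:
            p, load = p + 1, 0.0          # close this chain; the next processor starts empty
        if p == len(speeds):
            return False                  # ran out of processors
        load += w
    return True

def chain_on_chain(work, speeds, iters=60):
    lo, hi = 0.0, sum(work) / min(speeds)  # trivial bounds on the bottleneck time
    for _ in range(iters):                 # bisection to numerical precision
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if probe(work, speeds, mid) else (mid, hi)
    return hi

if __name__ == "__main__":
    print(chain_on_chain(work=[4, 2, 7, 1, 5, 3], speeds=[1.0, 2.0, 1.5]))
```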

2.
This work presents a novel parallel micro evolutionary algorithm for scheduling tasks in distributed heterogeneous computing and grid environments. The scheduling problem in heterogeneous environments is NP-hard, so a significant effort has been made in order to develop an efficient method to provide good schedules in reduced execution times. The parallel micro evolutionary algorithm is implemented using MALLBA, a general-purpose library for combinatorial optimization. Efficient numerical results are reported in the experimental analysis performed on both well-known problem instances and large instances that model medium-sized grid environments. The comparative study of traditional methods and evolutionary algorithms shows that the parallel micro evolutionary algorithm achieves a high problem solving efficacy, outperforming previous results already reported in the related literature, and also showing a good scalability behavior when facing high dimension problem instances.

3.
Targeting more realistic heterogeneous cluster computing environments, and fully accounting for processors with differing computation speeds, communication capabilities, and memory capacities, this work designs a multi-round scheduling algorithm for divisible loads that allows computation and communication operations to overlap and distributes the computational tasks in parallel over multiple rounds. Experimental results show that the algorithm not only achieves an asymptotically optimal schedule length comparable to the Uniform Multi-Round (UMR) algorithm, but can also handle larger application workloads, making it more practical.
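As a rough illustration of the multi-round idea only (this is neither the paper's algorithm nor UMR), the sketch below hands each worker, per round, a chunk proportional to its compute speed but capped by its memory capacity, so one round's computation can overlap with the next round's communication. All names, values, and the round count are illustrative assumptions.

```python
# Illustrative multi-round divisible-load dispatch: speed-proportional chunks,
# capped by per-worker memory. Not the paper's algorithm; values are made up.
def multiround_dispatch(total_load, speeds, memories, rounds=4):
    remaining = total_load
    plan = []                                           # plan[r][i]: chunk for worker i in round r
    for r in range(rounds):
        budget = remaining / (rounds - r)               # spread what is left over remaining rounds
        total_speed = sum(speeds)
        chunks = [min(budget * s / total_speed, m)      # proportional to speed, capped by memory
                  for s, m in zip(speeds, memories)]
        remaining -= sum(chunks)
        plan.append(chunks)
    return plan, remaining                              # leftover stays > 0 if memory caps bind

if __name__ == "__main__":
    plan, leftover = multiround_dispatch(1000.0, speeds=[1.0, 2.0, 4.0], memories=[80, 120, 200])
    for r, chunks in enumerate(plan):
        print(f"round {r}: {[round(c, 1) for c in chunks]}")
    print("undistributed:", round(leftover, 1))
```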

4.
The statistical properties of training, validation and test data play an important role in assuring optimal performance in artificial neural networks (ANNs). Researchers have proposed optimized data partitioning (ODP) and stratified data partitioning (SDP) methods to partition input data into training, validation and test datasets. ODP methods based on genetic algorithms (GA) are computationally expensive, as the random search space can be in the power of twenty or more for an average sized dataset. For SDP methods, clustering algorithms such as the self organizing map (SOM) and fuzzy clustering (FC) are used to form strata. It is assumed that data points in any individual stratum are in close statistical agreement. Reported clustering algorithms are designed to form natural clusters. In the case of large multivariate datasets, some of these natural clusters can be big enough that the furthest data vectors are statistically far away from the mean. Further, these algorithms are computationally expensive as well. We propose a custom design clustering algorithm (CDCA) to overcome these shortcomings. Comparisons are made using three benchmark case studies, one each from the classification, function approximation and prediction domains. The proposed CDCA data partitioning method is evaluated in comparison with the SOM, FC and GA based data partitioning methods. It is found that the CDCA data partitioning method not only performs well but also reduces the average CPU time.
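For context, a bare-bones SDP pass can be sketched with an off-the-shelf clustering step standing in for the CDCA: plain k-means forms the strata and each stratum is split proportionally, so the train/validation/test sets inherit similar statistics. The stratum count, split fractions, and synthetic data are illustrative assumptions, not the paper's setup.

```python
# Generic stratified data partitioning sketch: k-means strata (a stand-in for
# the paper's CDCA), split proportionally into train / validation / test.
import numpy as np
from sklearn.cluster import KMeans

def stratified_split(X, n_strata=5, fractions=(0.6, 0.2, 0.2), seed=0):
    rng = np.random.default_rng(seed)
    labels = KMeans(n_clusters=n_strata, n_init=10, random_state=seed).fit_predict(X)
    train, val, test = [], [], []
    for c in range(n_strata):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)                               # randomize within the stratum
        n_tr = int(round(fractions[0] * len(idx)))
        n_va = int(round(fractions[1] * len(idx)))
        train.extend(idx[:n_tr])
        val.extend(idx[n_tr:n_tr + n_va])
        test.extend(idx[n_tr + n_va:])
    return np.array(train), np.array(val), np.array(test)

if __name__ == "__main__":
    X = np.random.default_rng(1).normal(size=(200, 4))  # synthetic stand-in data
    tr, va, te = stratified_split(X)
    print(len(tr), len(va), len(te))
```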

5.
In this paper we analyze scheduling multiple divisible loads on a star-connected system of identical processors. It is shown that this problem is computationally hard. Some special cases appear to be particularly difficult, so it is not even known if they belong to the class NP. Exponential algorithms and special cases solvable in polynomial time are presented.

6.
Genetic algorithms for task scheduling problem
The scheduling and mapping of a precedence-constrained task graph to processors is considered to be the most crucial NP-complete problem in parallel and distributed computing systems. Several genetic algorithms have been developed to solve this problem. A common feature in most of them has been the use of a chromosomal representation for a schedule. However, these algorithms are monolithic, as they attempt to scan the entire solution space without considering how to reduce the complexity of the optimization process. In this paper, two genetic algorithms have been developed and implemented. The developed algorithms are genetic algorithms augmented with heuristic principles to improve performance. In the first genetic algorithm, two fitness functions are applied one after the other: the first is concerned with minimizing the total execution time (schedule length), and the second with satisfying the load balance. The second genetic algorithm is based on a task duplication technique to overcome the communication overhead. Our proposed algorithms have been implemented and evaluated using benchmarks. The results show that our algorithms consistently outperform the traditional algorithms.
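A toy version of the two-fitness idea can be sketched as follows, assuming independent tasks (the precedence constraints and the duplication technique described above are omitted): a chromosome assigns each task to a processor, the primary fitness is the schedule length, and load imbalance breaks ties. Population size, operators, and task times are illustrative assumptions.

```python
# Toy GA for task-to-processor assignment with a two-level fitness:
# (makespan, imbalance). A sketch, not the algorithms of the paper.
import random

def fitness(chrom, times, n_procs):
    loads = [0.0] * n_procs
    for task, proc in enumerate(chrom):
        loads[proc] += times[task]
    return max(loads), max(loads) - min(loads)          # (schedule length, imbalance)

def evolve(times, n_procs, pop=40, gens=200, pmut=0.1, seed=0):
    rnd = random.Random(seed)
    n = len(times)
    popn = [[rnd.randrange(n_procs) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda c: fitness(c, times, n_procs))   # two-level fitness
        survivors = popn[:pop // 2]
        children = []
        while len(children) < pop - len(survivors):
            a, b = rnd.sample(survivors, 2)
            cut = rnd.randrange(1, n)
            child = a[:cut] + b[cut:]                          # one-point crossover
            if rnd.random() < pmut:
                child[rnd.randrange(n)] = rnd.randrange(n_procs)   # point mutation
            children.append(child)
        popn = survivors + children
    best = min(popn, key=lambda c: fitness(c, times, n_procs))
    return best, fitness(best, times, n_procs)

if __name__ == "__main__":
    print(evolve([3, 1, 4, 1, 5, 9, 2, 6], n_procs=3))
```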

7.
The demand for maritime transportation has significantly increased over the past 20 years due to the rapid pace of globalization. Terminal managers confront the challenge of establishing an appropriate quay crane schedule to achieve the earliest departure time of a ship and provide efficient service. In general, quay crane scheduling problems include two main issues: (1) the allocation of quay cranes to handle the discharging and loading operations, and (2) the service sequence of ship bays in a vessel for each quay crane. Traditionally, terminal planners determine the quay crane schedule based on their experience and own judgment. In addition, the interference among cranes and the increase in ship size further magnify the difficulty dramatically. Accordingly, this paper proposes a modified genetic algorithm to deal with the problem. To test the optimization reliability of the proposed algorithm, a set of well-known benchmark problems is solved, and the results obtained are compared with other well-known existing algorithms. The comparison demonstrates that the proposed algorithm performs as well as many existing algorithms and obtains better solutions than the best known ones in certain instances. In addition, the computational time required is significantly lower, making it more applicable in practical situations.

8.
This paper investigates the real-time scheduling problem for handling heterogeneous divisible loads on cluster systems. Divisible load applications occur in many fields of science and engineering. Such applications can be easily parallelized in a master–worker fashion, but pose several scheduling challenges. We consider divisible loads associated with deadlines to enhance quality-of-service (QoS) and provide performance guarantees in distributed computing environments. In addition, since the divisible loads to be performed may widely vary in terms of their required hardware and software, we capture the loads’ various processing requirements in our load distribution strategies, a unique feature that is applicable for running proprietary applications only on certain eligible processing nodes. Thus in our problem formulation each load can only be processed by certain processors as both the loads and processors are heterogeneous. We propose scheduling algorithms referred to as Requirements-Aware Real-Time Scheduling (RARTS) algorithms, which consist of a novel scheduling policy, referred to as Minimum Slack Capacity First (MSCF), and two multi-round load distribution strategies, referred to as All Eligible Processors (AEP) and Least Capability First (LCF). We perform rigorous performance evaluation studies to quantify the performance of our strategies on a variety of scenarios.

9.
For fine grain task graphs, duplication-based scheduling algorithms are generally more efficient than list and cluster-based algorithms. However, most duplication-based heuristics try to duplicate all possible ancestor nodes of a given join node, in order to reduce the earliest start time (EST) of the join node, even though these ancestor nodes have already been allocated in previous steps. Thus, these duplication heuristics inevitably induce redundant duplications, which lead to the superfluous consumption of resources and generally deteriorate the scheduling result in the case of a bounded number of processors. When scheduling algorithms are used on an unbounded number of processors, the required number of processors grows excessively with the size of the task graph, thereby limiting the practicality of these algorithms for large task graphs. In this paper, we propose a novel algorithm designed to allocate join nodes without redundant duplications. In the proposed algorithm, if the ancestor nodes of a join node are duplicated when scheduling the join node, the original allocations of these ancestor nodes are removed using a very efficient method. The performance of the proposed algorithm, in terms of its normalized schedule length and efficiency, is compared with that of some of the recently proposed algorithms. The proposed algorithm generates better or comparable schedules with minimized duplication. Specifically, the simulation results show that it is most useful on a bounded number of processors.

10.
In a Grid computing system, many distributed scientific and engineering applications often require multi-institutional collaboration, large-scale resource sharing, wide-area communication, etc. Applications executing in such systems inevitably encounter different types of failures such as hardware failure, program failure, and storage failure. One way of taking failures into account is to employ a reliable scheduling algorithm. However, most existing Grid scheduling algorithms do not adequately consider the reliability requirements of an application. In recognition of this problem, we design a hierarchical reliability-driven scheduling architecture that includes both a local scheduler and a global scheduler. The local scheduler aims to effectively measure task reliability of an application in a Grid virtual node and incorporate the precedence constrained tasks’ reliability overhead into a heuristic scheduling algorithm. In the global scheduler, we propose a hierarchical reliability-driven scheduling algorithm based on quantitative evaluation of independent application reliability. Our experiments, based on both randomly generated graphs and the graphs of some real applications, show that our hierarchical scheduling algorithm performs much better than the existing scheduling algorithms in terms of system reliability, schedule length, and speedup.
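As a concrete reference point for "reliability overhead", the sketch below uses the common exponential failure model (the paper's exact formulation may differ): a task running for time t on a processor with failure rate lam completes with probability exp(-lam*t), and a greedy step can trade finish time against that probability. The scoring rule and all numbers are illustrative assumptions.

```python
# Hedged sketch of reliability-aware processor selection under an exponential
# failure model; not the paper's local/global scheduler.
import math

def task_reliability(exec_time, failure_rate):
    """Probability the task completes without a failure on this processor."""
    return math.exp(-failure_rate * exec_time)

def pick_processor(exec_times, failure_rates, ready_times):
    """Greedy step: choose the processor with the best finish-time / reliability trade-off."""
    best, best_score = None, None
    for p, (t, lam, r) in enumerate(zip(exec_times, failure_rates, ready_times)):
        finish = r + t
        score = finish / task_reliability(t, lam)   # smaller is better: fast AND reliable
        if best_score is None or score < best_score:
            best, best_score = p, score
    return best

if __name__ == "__main__":
    print(pick_processor([4.0, 3.0, 5.0], [1e-3, 5e-3, 1e-4], [2.0, 0.0, 1.0]))
```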

11.
We consider the problem of scheduling an application on a computing system consisting of heterogeneous processors and data repositories. The application consists of a large number of file-sharing, otherwise independent tasks. The files initially reside on the repositories. The processors and the repositories are connected through a heterogeneous interconnection network. Our aim is to assign the tasks to the processors, to schedule the file transfers from the repositories, and to schedule the executions of tasks on each processor in such a way that the turnaround time is minimized. We propose a heuristic composed of three phases: initial task assignment, task assignment refinement, and execution ordering. We experimentally compare the proposed heuristic with three well-known heuristics on a large number of problem instances. The proposed heuristic runs considerably faster than the existing heuristics and obtains 10–14% better turnaround times than the best of the three existing heuristics.

12.
This paper investigates the scheduling problem of parallel identical batch processing machines in which each machine can process a group of jobs simultaneously as a batch. Each job is characterized by its size and processing time. The processing time of a batch is given by the longest processing time among all jobs in the batch. Building on heuristic approaches, we propose a hybrid genetic heuristic (HGH) to minimize the makespan objective. To verify the performance of our algorithm, comparisons are made with a simulated annealing (SA) approach from the literature as a comparator algorithm. Computational experiments reveal that incorporating problem knowledge through heuristic procedures gives HGH the ability to find optimal or near-optimal solutions in a reasonable time.
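A simple constructive baseline makes the problem structure concrete (this is not the HGH of the paper): jobs are packed first-fit by size under the machine capacity, each batch's processing time is its longest job, and batches are spread over machines longest-first. The job data, capacity, and machine count are illustrative assumptions.

```python
# Baseline sketch for parallel identical batch machines: first-fit batching by
# size, batch time = longest job in the batch, then LPT assignment of batches.
def schedule_batches(jobs, capacity, n_machines):
    """jobs: list of (size, processing_time). Returns the resulting makespan."""
    batches = []                                    # each batch: [used_size, batch_time]
    for size, ptime in sorted(jobs, key=lambda j: -j[1]):
        for b in batches:
            if b[0] + size <= capacity:             # first existing batch with room
                b[0] += size
                b[1] = max(b[1], ptime)             # batch time = longest job in it
                break
        else:
            batches.append([size, ptime])           # open a new batch
    loads = [0.0] * n_machines
    for _, btime in sorted(batches, key=lambda b: -b[1]):
        m = loads.index(min(loads))                 # longest batches go first (LPT)
        loads[m] += btime
    return max(loads)

if __name__ == "__main__":
    print(schedule_batches([(3, 5.0), (2, 2.0), (4, 7.0), (1, 1.0)], capacity=5, n_machines=2))
```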

13.
This paper introduces a method to combine the advantages of both task parallelism and fine-grained co-design specialisation to achieve faster execution times than either method alone on distributed heterogeneous architectures. The method uses a novel mixed integer linear programming formalisation to assign code sections from parallel tasks to share computational components with the optimal trade-off between acceleration from component specialism and serialisation delay. The paper provides results for software benchmarks partitioned using the method, together with formal implementations of previous alternatives, to demonstrate both the practical tractability of the linear programming approach and the increase in program acceleration it can deliver.

14.
In this paper, we propose a novel use of data mining algorithms for the extraction of knowledge from a large set of flow shop schedules. The purpose of this work is to apply data mining methodologies to explore the patterns in data generated by an ant colony algorithm performing a scheduling operation, and to develop a rule set scheduler which approximates the ant colony algorithm's scheduler. Ant colony optimization (ACO) is a paradigm for designing metaheuristic algorithms for combinatorial optimization problems. The natural metaphor on which ant algorithms are based is that of ant colonies. Fascinated by the ability of the almost blind ants to establish the shortest route from their nests to the food source and back, researchers found that these ants secrete a substance called ‘pheromone’ and use its trails as a medium for communicating information among each other. The ant algorithm is simple to implement, and the results of the case studies show its ability to provide speedy and accurate solutions. Further, we employ genetic algorithm operators such as crossover and mutation to generate new regions of the solution space. The data mining tool we use is the decision tree produced by the See5 software after the instances are classified. The data mining extracts job scheduling knowledge aimed at the objective of makespan minimization in a flow shop environment. Data mining systems typically use conditional relationships represented by IF-THEN rules, allowing production managers to easily make decisions regarding flow shop scheduling based on various objective functions and constraints.
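As a rough stand-in for the See5 step (See5 itself is a commercial tool), the snippet below fits a scikit-learn decision tree on synthetic schedule-attribute data and prints its IF-THEN style rules; the features, labels, and threshold are invented purely for illustration and are not from the paper.

```python
# Stand-in for rule extraction from schedule data: a scikit-learn decision tree
# trained on synthetic attributes, printed as readable IF-THEN rules.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.uniform(1, 10, size=(300, 3))                 # e.g. job processing times on 3 machines
y = (X[:, 0] + X[:, 1] < X[:, 2] + 6).astype(int)     # synthetic "schedule early / late" label

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=["p_m1", "p_m2", "p_m3"]))   # rule-like view of the tree
```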

15.
In cloud computing, cost optimization is a prime concern for load scheduling. Swarm-based meta-heuristics are prominently used for load scheduling in distributed computing environments. Conventional load scheduling approaches require a lot of resources and strategies that are non-adaptive and static in the computation, thereby increasing the response time, the waiting time and the total cost of computation. Swarm intelligence-based load scheduling is adaptive, intelligent, collective, random, decentralized and stochastic, and relies on biologically inspired mechanisms rather than conventional ones. The genetic algorithm schedules the particles based on mutation and crossover techniques. The force and acceleration acting on a particle help in finding the velocity and position of the next particle. The best positions of the particles are assigned to cloudlets to be executed on the virtual machines in the cloud. The paper proposes a new load scheduling technique, the Hybrid Genetic-Gravitational Search Algorithm (HG-GSA), for reducing the total cost of computation. The total computational cost includes the cost of execution and transfer. HG-GSA works on a hybrid crossover-based gravitational search algorithm for searching the best position of the particle in the search space. The best position of the particle is used in calculating the force. HG-GSA is compared to existing approaches in the CloudSim simulator. Convergence and statistical analysis of the results show that the proposed HG-GSA approach reduces the total cost of computation considerably as compared to the existing PSO, Cloudy-GSA and LIGSA-C approaches.
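One gravitational-search step, in the spirit of the force/acceleration/velocity description above, can be sketched as follows; the hybrid crossover of HG-GSA and the CloudSim mapping to cloudlets are omitted, and the sphere cost, constants, and array shapes are illustrative assumptions.

```python
# Sketch of a single gravitational search algorithm (GSA) step: fitness-derived
# masses, pairwise attraction, then velocity/position update. Not HG-GSA itself.
import numpy as np

def gsa_step(pos, vel, fitness, G=1.0, eps=1e-9, rng=np.random.default_rng(0)):
    worst, best = fitness.max(), fitness.min()            # minimisation problem
    m = (worst - fitness + eps) / (worst - best + eps)
    mass = m / m.sum()                                     # normalised masses
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            diff = pos[j] - pos[i]
            dist = np.linalg.norm(diff) + eps
            acc[i] += rng.random() * G * mass[j] * diff / dist   # attraction toward j
    vel = rng.random(pos.shape) * vel + acc                # stochastic inertia + acceleration
    return pos + vel, vel

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pos = rng.uniform(-5, 5, size=(6, 2))
    vel = np.zeros_like(pos)
    fit = (pos ** 2).sum(axis=1)                           # sphere function as a toy cost
    print(gsa_step(pos, vel, fit)[0])
```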

16.
To address the problem of excessive task migrations by dynamic load balancing algorithms in heterogeneous cloud environments, a dynamic load balancing algorithm that minimizes the number of task migrations (MMLB) is proposed. The MMLB algorithm achieves task redistribution by grouping virtual machines with an adaptive threshold, minimizing the number of task migrations through a task selection algorithm, and optimizing task allocation through a task scheduling algorithm. Experimental comparisons of MMLB with the WRR, HBBLB, and LBF algorithms show that MMLB performs better on evaluation metrics such as makespan, average task response time, and load imbalance, while effectively reducing the number of task migrations. The experimental results verify the feasibility and effectiveness of the MMLB algorithm.
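A minimal sketch of the migration-minimising idea (not the MMLB implementation itself): virtual machines are grouped by an adaptive threshold around the mean load, and the largest tasks are moved first so that the fewest migrations bring an overloaded VM back under the threshold. The VM names, task lengths, and tolerance are illustrative assumptions.

```python
# Threshold-based rebalancing sketch that tries to keep the migration count low
# by moving the biggest tasks first. Illustrative only, not MMLB.
def rebalance(vm_tasks, tolerance=0.2):
    """vm_tasks: {vm: [task lengths]}. Returns a list of (task, src_vm, dst_vm) migrations."""
    loads = {vm: sum(ts) for vm, ts in vm_tasks.items()}
    mean = sum(loads.values()) / len(loads)
    upper = mean * (1 + tolerance)                           # adaptive overload threshold
    migrations = []
    for vm in sorted(loads, key=loads.get, reverse=True):    # most loaded VMs first
        for task in sorted(vm_tasks[vm], reverse=True):      # biggest tasks first
            if loads[vm] <= upper:
                break                                        # this VM is balanced now
            dst = min(loads, key=loads.get)                  # least-loaded VM as receiver
            if dst == vm or loads[dst] + task > upper:
                continue                                     # would overload the receiver; try smaller
            vm_tasks[vm].remove(task)
            vm_tasks[dst].append(task)
            loads[vm] -= task
            loads[dst] += task
            migrations.append((task, vm, dst))
    return migrations

if __name__ == "__main__":
    print(rebalance({"vm1": [5, 4, 3, 2], "vm2": [1], "vm3": [2]}))
```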

17.
Operator scheduling in data stream systems
In many applications involving continuous data streams, data arrival is bursty and the data rate fluctuates over time. Systems that seek to give rapid or real-time query responses in such an environment must be prepared to deal gracefully with bursts in data arrival without compromising system performance. We discuss one strategy for processing bursty streams: adaptive, load-aware scheduling of query operators to minimize resource consumption during times of peak load. We show that the choice of an operator scheduling strategy can have significant impact on the runtime system memory usage as well as output latency. Our aim is to design a scheduling strategy that minimizes the maximum runtime system memory while maintaining the output latency within prespecified bounds. We first present Chain scheduling, an operator scheduling strategy for data stream systems that is near-optimal in minimizing runtime memory usage for any collection of single-stream queries involving selections, projections, and foreign-key joins with stored relations. Chain scheduling also performs well for queries with sliding-window joins over multiple streams and multiple queries of the above types. However, during bursts in input streams, when there is a buildup of unprocessed tuples, Chain scheduling may lead to high output latency. We study the online problem of minimizing maximum runtime memory, subject to a constraint on maximum latency. We present preliminary observations, negative results, and heuristics for this problem. A thorough experimental evaluation is provided in which we demonstrate the potential benefits of Chain scheduling and its different variants, compare it with competing scheduling strategies, and validate our analytical conclusions.
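A much-simplified sketch of the greedy intuition behind Chain scheduling (the actual strategy operates on the lower envelope of each operator path's progress chart, which is omitted here): every operator shrinks its input by (1 - selectivity) per unit of processing time, and at each step the queued operator with the steepest memory-reduction slope runs. The operator parameters and arrival pattern are illustrative assumptions.

```python
# Simplified "steepest slope" operator scheduler for a single pipeline of
# stream operators; a sketch of the intuition, not the full Chain strategy.
def chain_like_schedule(ops, arrivals, steps):
    """ops: list of (time_per_tuple, selectivity). arrivals[t]: tuples entering op 0 at step t."""
    queues = [0.0] * len(ops)                       # tuples waiting at each operator
    peak_memory, order = 0.0, []
    for t in range(steps):
        queues[0] += arrivals[t] if t < len(arrivals) else 0.0
        candidates = [i for i, q in enumerate(queues) if q > 0]
        if candidates:
            # steepest slope = largest size reduction per unit of processing time
            i = max(candidates, key=lambda k: (1 - ops[k][1]) / ops[k][0])
            done = min(queues[i], 1.0 / ops[i][0])  # tuples processable in one time step
            queues[i] -= done
            if i + 1 < len(ops):
                queues[i + 1] += done * ops[i][1]   # surviving tuples flow downstream
            order.append(i)
        peak_memory = max(peak_memory, sum(queues))
    return order, peak_memory

if __name__ == "__main__":
    ops = [(0.2, 0.5), (0.5, 0.1)]                  # (time per tuple, selectivity)
    print(chain_like_schedule(ops, arrivals=[10, 0, 0, 8, 0, 0], steps=12))
```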

18.
Efficient task scheduling on heterogeneous distributed computing systems (HeDCSs) requires the consideration of the heterogeneity of processors and the inter-processor communication. This paper presents a two-phase algorithm, called H2GS, for task scheduling on HeDCSs. The first phase implements a heuristic list-based algorithm, called LDCP, to generate a high quality schedule. In the second phase, the LDCP-generated schedule is injected into the initial population of a customized genetic algorithm, called GAS, which proceeds to evolve shorter schedules. GAS employs a simple genome composed of a two-dimensional chromosome. A mapping procedure is developed which maps every possible genome to a valid schedule. Moreover, GAS uses customized operators that are designed for the scheduling problem to enable an efficient stochastic search. The performance of each phase of H2GS is compared to two leading scheduling algorithms, and H2GS outperforms both algorithms. The improvement in performance obtained by H2GS increases as the inter-task communication cost increases.

19.
Scheduling in flexible manufacturing systems (FMS) must take account of the shorter lead-time, the multiprocessing environment, the flexibility of alternative workstations with different processing times, and the dynamically changing states. The best scheduling approach, as described here, is to minimize the makespan t_M, the total flow time t_F, and the total tardiness penalty p_T. However, manufacturing system problems of this kind are difficult to handle with traditional optimization techniques. This article presents a new flow network-based hybrid genetic algorithm (hGA) approach for generating static schedules in an FMS environment. The proposed method is combined with a neighborhood search technique in the mutation operation to improve the solution of the FMS problem and to enhance the performance of the genetic search process. We also adapt the swap mutation and the local search-based mutation ratio. Numerical experiments show that the proposed flow network-based hGA is both effective and efficient for FMS problems.

20.
In this paper, we develop a simulation-based two-phase genetic algorithm for the capacitated re-entrant line scheduling problem. The structure of a chromosome consists of two sub-chromosomes for buffer allocation and server allocation, respectively, while considering all possible states of the system in terms of the buffer levels of the workstations and assigning a preferred job stage to each component of the chromosome. As an implementation of the suggested algorithm, a job priority-based randomized policy is defined, which reflects the job priority and the appropriateness of local non-idling in allocating buffering and processing capacity to available job instances. The algorithm is combined with a polynomial time sub-optimal deadlock avoidance policy, namely Banker's algorithm, and the fitness of a chromosome is evaluated based on simulation. The performance of the proposed algorithm is evaluated through a numerical experiment, showing that the suggested approach holds considerable promise for providing effective and computationally efficient approximations to the optimal scheduling policy, consistently outperforming the typically employed heuristics.
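Since the abstract leans on Banker's algorithm for deadlock avoidance, here is the textbook safety check in isolation (a generic sketch, not the paper's FMS-specific adaptation); the resource matrices are the standard illustrative example, not data from the paper.

```python
# Textbook Banker's algorithm safety check: a state is safe if some ordering
# lets every job finish using only the currently free capacity plus releases.
def is_safe(available, allocation, maximum):
    """available[r]: free units; allocation[j][r] and maximum[j][r] per job j, resource r."""
    n_jobs, n_res = len(allocation), len(available)
    work = list(available)
    finished = [False] * n_jobs
    progressed = True
    while progressed:
        progressed = False
        for j in range(n_jobs):
            need = [maximum[j][r] - allocation[j][r] for r in range(n_res)]
            if not finished[j] and all(need[r] <= work[r] for r in range(n_res)):
                for r in range(n_res):
                    work[r] += allocation[j][r]     # job j can finish and release its capacity
                finished[j] = True
                progressed = True
    return all(finished)

if __name__ == "__main__":
    print(is_safe(available=[3, 3, 2],
                  allocation=[[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]],
                  maximum=[[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]))
```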

