Similar Articles
20 similar articles found.
1.
This paper presents a heuristic algorithm for solving a job-shop scheduling problem with sequence-dependent setup times and min/max separation constraints among the activities (SDST-JSSP/max). The algorithm relies on a core constraint-based search procedure, which generates consistent orderings of activities that require the same resource by incrementally imposing precedence constraints on a temporally feasible solution. Key to the effectiveness of the search procedure is a conflict sampling method biased toward the most critical conflicts, coupled with a non-deterministic choice heuristic to guide the base conflict resolution process. This constraint-based search is then embedded within a larger iterative-sampling search framework to broaden search space coverage and promote solution optimization. The efficacy of the overall heuristic algorithm is demonstrated empirically both on a set of previously studied job-shop scheduling benchmark problems with sequence-dependent setup times and by introducing a new benchmark with setups and generalized precedence constraints.

2.
We propose a novel, exact any-time search strategy that combines iterative deepening A* (IDA*) with depth-first search, and we consider the job shop scheduling problem with makespan minimization as a test bed. The two search strategies are combined so that limited depth-first searches are issued from some of the states distributed along the frontier reached by IDA* in each iteration. In this way, a proper equilibrium between intensification and diversification of the search effort is achieved while the algorithm keeps the capability of obtaining tight lower bounds. To evaluate the proposed strategy and to compare it with other methods, we have conducted an experimental study involving a number of conventional benchmarks with instances of various sizes. The results of these experiments show that the proposed algorithm takes less time than other methods in reaching optimal solutions for small and medium-size instances, and that it is quite competitive in reaching good solutions and good lower bounds for the instances that cannot be optimally solved.
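For orientation, the basic iterative deepening A* loop that the hybrid builds on is sketched below. This is a minimal generic sketch, assuming placeholder callbacks h (an admissible heuristic), successors and is_goal; it deliberately omits the paper's depth-first probes from frontier states and any job-shop-specific bounds.

```python
# Minimal generic IDA* skeleton (illustration only; the paper's hybrid additionally
# issues depth-first probes from frontier states and uses JSP-specific bounds).

def ida_star(root, h, successors, is_goal):
    """Iteratively deepen on the f = g + h bound; each iteration either returns a
    solution path or the smallest f-value that exceeded the current bound, which
    becomes the next bound (a valid lower bound when h never overestimates)."""
    bound = h(root)

    def dfs(node, g, bound):
        f = g + h(node)
        if f > bound:
            return None, f                 # exceeded bound: report candidate for next bound
        if is_goal(node):
            return [node], f
        next_bound = float('inf')
        for child, step_cost in successors(node):
            path, t = dfs(child, g + step_cost, bound)
            if path is not None:
                return [node] + path, t
            next_bound = min(next_bound, t)
        return None, next_bound

    while True:
        path, t = dfs(root, 0, bound)
        if path is not None:
            return path, bound             # optimal if h is admissible
        if t == float('inf'):
            return None, bound             # search space exhausted, no solution
        bound = t                          # tighter lower bound for the next iteration
```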

3.
A space-bounded Stack Machine is a regular Turing Machine with a read-only input tape, several space-bounded read-write work tapes, and an unbounded stack. Stack Machines with a logarithmic space bound have been connected to other classical models of computation, such as polynomial-time Turing Machines (P) (Cook in J Assoc Comput Mach 18:4–18, 1971) and polynomial size, polylogarithmic depth, bounded fan-in circuits (NC), e.g., Borodin et al. (SIAM J Comput 18, 1989). In this paper, we present significant new lower bounds and techniques for Stack Machines. This comes in the form of a trade-off lower bound between space and number of passes over the input tape. Specifically, we give an explicit permuted inner product function such that any Stack Machine computing this function requires either $\Omega(N^{1/4-\epsilon})$ space or $\Omega(N^{1/4-\epsilon})$ passes for every constant $\epsilon > 0$, where N is the input size. In the case of logarithmic space Stack Machines, this yields an unconditional $\Omega(N^{1/4-\epsilon})$ lower bound for the number of passes. To put this result in perspective, we note that Stack Machines with logarithmic space and a single pass over the input can compute Parity, Majority, as well as certain languages outside NC. The latter follows from Allender (J Assoc Comput Mach 36:912–928, 1989), conditional on the widely believed complexity assumption that PSPACE $\subsetneq$ EXP. Our technique is a novel communication complexity reduction, thereby extending the already wide range of models of computation for which communication complexity can be used to obtain lower bounds. Informally, we show that a k-player number-in-hand (NIH) communication protocol for a base function f can efficiently simulate a space- and pass-bounded Stack Machine for a related function F, which consists of several "permuted" instances of f, bundled together by a combining function h. Trade-off lower bounds for Stack Machines then follow from known communication complexity lower bounds. The framework for this reduction was given by Beame & Huynh-Ngoc (2008), who used it to obtain similar trade-off lower bounds for Turing Machines with a constant number of pass-bounded external tapes. We also prove that the latter cannot efficiently simulate Stack Machines, conditional on the complexity assumption that E $\not\subset$ PSPACE. It is the treatment of an unbounded stack which constitutes the main technical novelty in our communication complexity reduction.

4.
We provide new bounds for the worst case approximation ratio of the classic Longest Processing Time (Lpt) heuristic for related machine scheduling ($Q||C_{\max}$). For different machine speeds, Lpt was first considered by Gonzalez et al. (SIAM J. Comput. 6(1):155–166, 1977). The best previously known bounds date back more than 20 years: Dobson (SIAM J. Comput. 13(4):705–716, 1984), and independently Friesen (SIAM J. Comput. 16(3):554–560, 1987), showed that the worst case ratio of Lpt is in the interval (1.512, 1.583), and in (1.52, 1.67), respectively. We tighten the upper bound to $1+\sqrt{3}/3\approx 1.5773$, and the lower bound to 1.54. Although this improvement might seem minor, we consider the structure of potential lower bound instances more systematically than former works. We present a scheme for a job-exchanging process, which, repeated any number of times, gradually increases the lower bound. For the new upper bound, this systematic method, together with a new idea of introducing fractional jobs, facilitated a proof that is surprisingly simple, relative to the result. We present the upper-bound proof in parameterized terms, which leaves room for further improvements.
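For reference, the Lpt rule for related machines is easy to state: sort jobs by processing requirement in decreasing order and assign each one to the machine on which it would finish earliest. The sketch below is a generic illustration with hypothetical inputs, not the lower-bound instances or the fractional-job argument from the paper.

```python
# Lpt for related machines: largest jobs first, each placed on the machine
# where it would complete earliest (execution time of job j on machine i is p_j / s_i).

def lpt_related(jobs, speeds):
    """jobs: processing requirements p_j; speeds: machine speeds s_i.
    Returns (assignment, makespan)."""
    load = [0.0] * len(speeds)
    assignment = {}
    for j, p in sorted(enumerate(jobs), key=lambda x: -x[1]):
        # pick the machine minimizing the completion time of this job
        i = min(range(len(speeds)), key=lambda m: load[m] + p / speeds[m])
        load[i] += p / speeds[i]
        assignment[j] = i
    return assignment, max(load)

# Example with hypothetical data: 3 machines with speeds 1, 1, 2 and six jobs.
if __name__ == "__main__":
    print(lpt_related([7, 6, 5, 4, 3, 2], [1.0, 1.0, 2.0]))
```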

5.
Although the earliest-deadline-first (EDF) policy is known to be optimal for preemptive real-time task scheduling in uniprocessor systems, the schedulability analysis problem has recently been shown to be $\mathit{co}\mathcal{NP}$-hard. Therefore, approximation algorithms, and in particular, approximations based on resource augmentation have attracted a lot of attention for both uniprocessor and multiprocessor systems. Resource augmentation based approximations assume a certain speedup of the processor(s). Using the notion of approximate demand bound function (dbf), in this paper we show that for uniprocessor systems the resource augmentation factor is at most $\frac{2e-1}{e} \approx 1.6322$, where e is Euler's number. We approximate the dbf using a linear approximation when the analysis interval length of interest is larger than the relative deadline of the task. For identical multiprocessor systems with M processors and constrained-deadline task sets, we show that the deadline-monotonic partitioning (proposed by Baruah and Fisher) with the approximate dbf leads to an approximation factor of $\frac{3e-1}{e}-\frac{1}{M} \approx 2.6322-\frac{1}{M}$ with respect to resource augmentation. We also show that the corresponding factor is $3-\frac{1}{M}$ for arbitrary-deadline task sets. The best known results so far were $3-\frac{1}{M}$ for constrained-deadline tasks and $4-\frac{2}{M}$ for arbitrary-deadline ones. Our tighter analysis exploits the structure of the approximate dbf directly and uses the processor utilization violations (which were ignored in all previous analyses) for analyzing resource augmentation factors. We also provide concrete input instances to show that the lower bound on the resource augmentation factor for uniprocessor systems (using the above approximate dbf) is 1.5, and the corresponding bound is 2.5 for identical multiprocessor systems with an arbitrary order of fitting and a large number of processors. Further, we also provide a polynomial-time approximation scheme (PTAS) to derive near-optimal solutions under the assumption that the ratio of the maximum relative deadline to the minimum relative deadline of tasks is a constant, which is a more relaxed assumption compared to the assumptions required for deriving such a PTAS in the past.
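For context, the sketch below shows the textbook exact demand bound function of a sporadic task together with the common linear over-approximation of it. It is only meant to illustrate the kind of approximate dbf discussed above, assuming the standard sporadic task model (C, D, T); it is not the paper's exact construction or analysis.

```python
from math import floor

# Demand bound functions for one sporadic task (C = WCET, D = relative deadline, T = period).

def dbf_exact(C, D, T, t):
    """Exact demand in any interval of length t: max(0, floor((t - D)/T) + 1) * C."""
    if t < D:
        return 0
    return (floor((t - D) / T) + 1) * C

def dbf_approx(C, D, T, t):
    """Common linear upper approximation: one full job at t = D, then demand grows
    at the utilization rate C/T, so dbf_approx(t) >= dbf_exact(t) for all t."""
    if t < D:
        return 0
    return C + (C / T) * (t - D)

# Example with a hypothetical task: C = 2, D = 5, T = 7.
if __name__ == "__main__":
    for t in (4, 5, 12, 19, 26):
        print(t, dbf_exact(2, 5, 7, t), round(dbf_approx(2, 5, 7, t), 2))
```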

6.
Matrix models are ubiquitous for constraint problems. Many such problems have a matrix of variables $\mathcal{M}$, with the same constraint C defined by a finite-state automaton $\mathcal{A}$ on each row of $\mathcal{M}$ and a global cardinality constraint $\mathit{gcc}$ on each column of $\mathcal{M}$. We give two methods for deriving, by double counting, necessary conditions on the cardinality variables of the $\mathit{gcc}$ constraints from the automaton $\mathcal{A}$. The first method yields linear necessary conditions and simple arithmetic constraints. The second method introduces the cardinality automaton, which abstracts the overall behaviour of all the row automata and can be encoded by a set of linear constraints. We also provide a domain consistency filtering algorithm for the conjunction of lexicographic ordering constraints between adjacent rows of $\mathcal{M}$ and (possibly different) automaton constraints on the rows. We evaluate the impact of our methods in terms of runtime and search effort on a large set of nurse rostering problem instances.

7.
With the rapid development of networking technology, grid computing has emerged as a source for satisfying the increasing demand for computing power from the scientific computing community. Mostly, user applications in scientific and enterprise domains are constructed in the form of workflows in which precedence constraints between tasks are defined. Scheduling of workflow applications belongs to the class of NP-hard problems, so meta-heuristic approaches are preferred options. In this paper, an $\varepsilon$-fuzzy dominance sort based discrete particle swarm optimization ($\varepsilon$-FDPSO) approach is used to solve the workflow scheduling problem in the grid. The $\varepsilon$-FDPSO approach has never been used earlier in grid scheduling. The fuzzy dominance metric, which quantifies the relative fitness of solutions in the multi-objective domain, is used to generate the Pareto optimal solutions. In addition, the scheme also incorporates a fuzzy based mechanism to determine the best compromise solution. Two scheduling problems are solved for the workflow applications. In the first, we address two major conflicting objectives, i.e. makespan (execution time) and cost, under deadline and budget constraints. In the second, we optimize makespan, cost and reliability objectives simultaneously in order to incorporate the dynamic characteristics of grid resources. The performance of the approach has been compared with other acknowledged meta-heuristics such as the non-dominated sorting genetic algorithm and multi-objective particle swarm optimization. The simulation analysis substantiates that the solutions obtained with $\varepsilon$-FDPSO deliver better convergence and more uniform spacing among the solutions while keeping the computation overhead limited.

8.
Nowadays, many real-time operating systems discretize time relying on a system time unit. To take this behavior into account, real-time scheduling algorithms must adopt a discrete-time model in which both the timing requirements of tasks and their time allocations have to be integer multiples of the system time unit. That is, tasks cannot be executed for less than one time unit, which implies that they always have to achieve a minimum amount of work before they can be preempted. Assuming such a discrete-time model, the authors of Zhu et al. (Proceedings of the 24th IEEE international real-time systems symposium (RTSS 2003), 2003; J Parallel Distrib Comput 71(10):1411–1425, 2011) proposed an efficient "boundary fair" algorithm (named BF) and proved its optimality for the scheduling of periodic tasks while achieving full system utilization. However, BF cannot handle sporadic tasks due to their inherently irregular and unpredictable job release patterns. In this paper, we propose an optimal boundary-fair scheduling algorithm for sporadic tasks (named BF$^2$), which follows the same principle as BF by making scheduling decisions only at the job arrival times and (expected) task deadlines. This new algorithm was implemented in Linux, and we show through experiments conducted on a multicore machine that BF$^2$ outperforms the state-of-the-art discrete-time optimal scheduler (PD$^2$), benefiting from much lower scheduling overhead. Furthermore, these experimental results show that BF$^2$ is barely dependent on the length of the system time unit, while PD$^2$ (the only other existing solution for the scheduling of sporadic tasks in discrete-time systems) sees its number of preemptions and migrations and the time spent making scheduling decisions increase linearly when improving the time resolution of the system.

9.
Conventional studies on buffer-constrained flowshop scheduling problems have considered applications with a limitation on the number of jobs that are allowed in the intermediate storage buffer before flowing to the next machine. The study in Lin et al. (Comput. Oper. Res. 36(4):1158–1175, 2008a) considered a two-machine flowshop problem with "processing time-dependent" buffer constraints for multimedia applications. A "passive" prefetch model (the PP-problem), in which the download process is suspended unless the buffer is sufficient for keeping an incoming media object, was applied in Lin et al. (Comput. Oper. Res. 36(4):1158–1175, 2008a). This study further considers an "active" prefetch model (the AP-problem) that exploits the unoccupied buffer space by advancing the download of the incoming object by a computed maximal duration that does not cause a buffer overflow. We obtain new complexity results for both problems. This study also proposes a new lower bound which improves the branch and bound algorithm presented in Lin et al. (Comput. Oper. Res. 36(4):1158–1175, 2008a). For the PP-problem, compared to the lower bounds developed in Lin et al. (Comput. Oper. Res. 36(4):1158–1175, 2008a), the results of the simulation experiments show that the proposed new lower bound cuts, on average, about 38% of the nodes and 32% of the execution time needed to find optimal solutions.

10.
Power efficiency is one of the main challenges in large-scale distributed systems such as datacenters, Grids, and Clouds. One can study the scheduling of applications in such large-scale distributed systems by representing applications as sets of precedence-constrained tasks and modeling them as Directed Acyclic Graphs. In this paper we address the problem of scheduling a set of tasks with precedence constraints on a heterogeneous set of Computing Resources (CRs) with the dual objective of minimizing the overall makespan and reducing the aggregate power consumption of CRs. Most of the related works in this area use the Dynamic Voltage and Frequency Scaling (DVFS) approach to achieve these objectives. However, DVFS requires special hardware support that may not be available on all processors in large-scale distributed systems. In contrast, we propose a novel two-phase solution called PASTA that does not require any special hardware support. In its first phase, it uses a novel algorithm to select a subset of available CRs for running an application that can balance between lower overall power consumption of CRs and shorter makespan of application task schedules. In its second phase, it uses a low-complexity power-aware algorithm that creates a schedule for running application tasks on the selected CRs. We show that the overall time complexity of PASTA is $O(p \cdot v^{2})$, where $p$ is the number of CRs and $v$ is the number of tasks. Using simulation experiments on real-world task graphs, we show that the makespans of schedules produced by PASTA are approximately 20% longer than those produced by the well-known HEFT algorithm. However, the schedules produced by PASTA consume nearly 60% less energy than those produced by HEFT. Empirical experiments on a physical test-bed also confirm the power efficiency of PASTA in comparison with HEFT.

11.
2-Opt is probably the most basic local search heuristic for the TSP. This heuristic achieves amazingly good results on "real world" Euclidean instances both with respect to running time and approximation ratio. There are numerous experimental studies on the performance of 2-Opt. However, the theoretical knowledge about this heuristic is still very limited. Not even its worst case running time on 2-dimensional Euclidean instances was known so far. We clarify this issue by presenting, for every $p\in\mathbb{N}$, a family of $L_p$ instances on which 2-Opt can take an exponential number of steps. Previous probabilistic analyses were restricted to instances in which n points are placed uniformly at random in the unit square $[0,1]^2$, where it was shown that the expected number of steps is bounded by $\tilde{O}(n^{10})$ for Euclidean instances. We consider a more advanced model of probabilistic instances in which the points can be placed independently according to general distributions on $[0,1]^d$, for an arbitrary $d\geq 2$. In particular, we allow different distributions for different points. We study the expected number of local improvements in terms of the number n of points and the maximal density $\phi$ of the probability distributions. We show an upper bound on the expected length of any 2-Opt improvement path of $\tilde{O}(n^{4+1/3}\cdot\phi^{8/3})$. When starting with an initial tour computed by an insertion heuristic, the upper bound on the expected number of steps improves even to $\tilde{O}(n^{4+1/3-1/d}\cdot\phi^{8/3})$. If the distances are measured according to the Manhattan metric, then the expected number of steps is bounded by $\tilde{O}(n^{4-1/d}\cdot\phi)$. In addition, we prove an upper bound of $O(\sqrt[d]{\phi})$ on the expected approximation factor with respect to all $L_p$ metrics. Let us remark that our probabilistic analysis covers as special cases the uniform input model with $\phi=1$ and a smoothed analysis with Gaussian perturbations of standard deviation $\sigma$ with $\phi\sim 1/\sigma^d$.
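To make the heuristic under study concrete, a single 2-Opt improvement pass is sketched below. It is a generic textbook formulation, assuming a hypothetical dist(a, b) distance callback; it has no connection to the exponential lower-bound construction or the probabilistic model analyzed above.

```python
# One pass of 2-Opt: replace edges (t[i], t[i+1]) and (t[j], t[j+1]) by
# (t[i], t[j]) and (t[i+1], t[j+1]) whenever that shortens the tour,
# which amounts to reversing the segment t[i+1..j].

def two_opt_pass(tour, dist):
    """tour: list of city indices; dist(a, b): distance function.
    Returns (tour, improved_flag) after scanning all edge pairs once."""
    n = len(tour)
    improved = False
    for i in range(n - 1):
        for j in range(i + 2, n):
            if i == 0 and j == n - 1:
                continue  # adjacent edges around the tour start, nothing to exchange
            a, b = tour[i], tour[i + 1]
            c, d = tour[j], tour[(j + 1) % n]
            # small epsilon guards against cycling on floating-point ties
            if dist(a, b) + dist(c, d) > dist(a, c) + dist(b, d) + 1e-12:
                tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                improved = True
    return tour, improved
```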

12.
This paper presents new schedulability tests for preemptive global fixed-priority (FP) scheduling of sporadic tasks on an identical multiprocessor platform. One of the main challenges in deriving a schedulability test for global FP scheduling is identifying the worst-case runtime behavior, i.e., the critical instant, at which the release of a job suffers the maximum interference from the jobs of its higher priority tasks. Unfortunately, the critical instant is not yet known for sporadic tasks under global FP scheduling. To overcome this limitation, pessimism is introduced during the schedulability analysis to safely approximate the worst case. The endeavor in this paper is to reduce such pessimism by proposing three new schedulability tests for global FP scheduling. Another challenge for global FP scheduling is the problem of assigning the fixed priorities to the tasks, because no efficient method to find the optimal priority ordering is currently known in this setting. Each of the schedulability tests proposed in this paper can be used to determine the priority of each task based on Audsley's approach. It is shown that the proposed tests not only theoretically dominate but also empirically perform better than the state-of-the-art schedulability test for global FP scheduling of sporadic tasks.

13.
This paper introduces the notion of distributed verification without preprocessing. It focuses on the Minimum-weight Spanning Tree (MST) verification problem and establishes tight upper and lower bounds for the time and message complexities of this problem. Specifically, we provide an MST verification algorithm that achieves simultaneously $\tilde{O}(m)$ messages and $\tilde{O}(\sqrt{n} + D)$ time, where m is the number of edges in the given graph G, n is the number of nodes, and D is G's diameter. On the other hand, we show that any MST verification algorithm must send $\tilde{\varOmega}(m)$ messages and incur $\tilde{\varOmega}(\sqrt{n} + D)$ time in the worst case. Our upper bound result appears to indicate that the verification of an MST may be easier than its construction, since for MST construction, both lower bounds of $\tilde{\varOmega}(m)$ messages and $\tilde{\varOmega}(\sqrt{n} + D)$ time hold, but at the moment there is no known distributed algorithm that constructs an MST and achieves simultaneously $\tilde{O}(m)$ messages and $\tilde{O}(\sqrt{n} + D)$ time. Specifically, the best known time-optimal algorithm (using $\tilde{O}(\sqrt{n} + D)$ time) requires $O(m+n^{3/2})$ messages, and the best known message-optimal algorithm (using $\tilde{O}(m)$ messages) requires $O(n)$ time. On the other hand, our lower bound results indicate that the verification of an MST is not significantly easier than its construction.
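For intuition about what is being verified, recall the sequential criterion (the cycle property): a spanning tree is an MST if and only if every non-tree edge weighs at least as much as the heaviest edge on the tree path between its endpoints. The naive sequential check below illustrates only this criterion; the distributed, message- and time-efficient algorithm of the paper is entirely different.

```python
from collections import defaultdict

# Naive MST verification via the cycle property: T is an MST of G iff for every
# non-tree edge (u, v, w), w >= (max edge weight on the tree path from u to v).

def is_mst(n, graph_edges, tree_edges):
    """graph_edges, tree_edges: lists of (u, v, w). Assumes tree_edges already form
    a spanning tree of the n nodes. O(n) work per non-tree edge; illustration only."""
    adj = defaultdict(list)
    for u, v, w in tree_edges:
        adj[u].append((v, w))
        adj[v].append((u, w))

    def path_max(src, dst):
        # walk the tree from src, carrying the heaviest edge seen on the (unique) path
        stack, seen = [(src, 0.0)], {src}
        while stack:
            node, mx = stack.pop()
            if node == dst:
                return mx
            for nxt, w in adj[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append((nxt, max(mx, w)))
        return None  # unreachable when tree_edges span the graph

    tree_set = {frozenset((u, v)) for u, v, _ in tree_edges}
    return all(w >= path_max(u, v)
               for u, v, w in graph_edges
               if frozenset((u, v)) not in tree_set)
```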

14.
We give matching upper and lower bounds of $\varTheta(\min(\frac{\log m}{\log \log m}, n))$ for the individual step complexity of a wait-free m-valued adopt-commit object implemented using multi-writer registers for n anonymous processes. While the upper bound is deterministic, the lower bound holds for randomized adopt-commit objects as well. Our results are based on showing that adopt-commit objects are equivalent, up to small additive constants, to a simpler class of objects that we call conflict detectors. Our anonymous lower bound also applies to the individual step complexity of m-valued wait-free anonymous consensus, even for randomized algorithms with global coins against an oblivious adversary. The upper bound can be used to slightly improve the cost of randomized consensus against an oblivious adversary. For deterministic non-anonymous implementations of adopt-commit objects, we show a lower bound of $\varOmega(\min(\frac{\log m}{\log \log m}, \frac{\sqrt{\log n}}{\log \log n}))$ and an upper bound of $O(\min(\frac{\log m}{\log \log m}, \log n))$ on the worst-case individual step complexity. For randomized non-anonymous implementations, we show that any execution contains at least one process whose steps exceed the deterministic lower bound.

15.
We prove new lower bounds for learning intersections of halfspaces, one of the most important concept classes in computational learning theory. Our main result is that any statistical-query algorithm for learning the intersection of $\sqrt{n}$ halfspaces in n dimensions must make $2^{\varOmega(\sqrt{n})}$ queries. This is the first non-trivial lower bound on the statistical query dimension for this concept class (the previous best lower bound was $n^{\Omega(\log n)}$). Our lower bound holds even for intersections of low-weight halfspaces. In the latter case, it is nearly tight. We also show that the intersection of two majorities (low-weight halfspaces) cannot be computed by a polynomial threshold function (PTF) with fewer than $n^{\Omega(\log n/\log\log n)}$ monomials. This is the first super-polynomial lower bound on the PTF length of this concept class, and is nearly optimal. For intersections of $k=\omega(\log n)$ low-weight halfspaces, we improve our lower bound to $\min\{2^{\varOmega(\sqrt{n})}, n^{\varOmega(k/\log k)}\}$, which is also nearly optimal. As a consequence, intersections of even two halfspaces are not computable by polynomial-weight PTFs, the most expressive class of functions known to be efficiently learnable via Jackson's Harmonic Sieve algorithm. Finally, we report our progress on the weak learnability of intersections of halfspaces under the uniform distribution.

16.
We consider transactional memory contention management in the context of balanced workloads, where if a transaction is writing, the number of write operations it performs is a constant fraction of its total reads and writes. We explore the theoretical performance boundaries of contention management in balanced workloads from the worst-case perspective by presenting and analyzing two new polynomial time contention management algorithms. We analyze the performance of a contention management algorithm by comparison with an optimal offline contention management algorithm to provide a competitive ratio. The first algorithm, Clairvoyant, is $O(\sqrt{s})$-competitive, where s is the number of shared resources. This algorithm depends on explicitly knowing the conflict graph at each time step of the execution. The second algorithm, Non-Clairvoyant, is $O(\sqrt{s} \cdot \log n)$-competitive, with high probability, which is only an $O(\log n)$ factor worse, but does not require knowledge of the conflict graph, where n is the number of transactions. Both of these algorithms are greedy. We also prove that the performance of Clairvoyant is close to optimal, since there is no polynomial time contention management algorithm for the balanced transaction scheduling problem that is better than $O((\sqrt{s})^{1-\varepsilon})$-competitive for any constant $\varepsilon>0$, unless NP $\subseteq$ ZPP. To our knowledge, these results are significant improvements over the best previously known $O(s)$ competitive ratio bound.

17.
We consider mechanisms without payments for the problem of scheduling unrelated machines. Specifically, we consider truthful in expectation randomized mechanisms under the assumption that a machine (player) is bound by its reports: when a machine lies and reports value $\tilde{t}_{ij}$ for a task instead of the actual one $t_{ij}$, it will execute for time $\tilde{t}_{ij}$ if it gets the task (unless the declared value $\tilde{t}_{ij}$ is less than the actual value $t_{ij}$, in which case it will execute for time $t_{ij}$). Our main technical result is an optimal mechanism for one task and n players which has approximation ratio (n+1)/2. We also provide a matching lower bound, showing that no other truthful mechanism can achieve a better approximation ratio. This immediately gives an approximation ratio of (n+1)/2 and n(n+1)/2 for social cost and makespan minimization, respectively, for any number of tasks. We also study the price of anarchy of natural algorithms.

18.
In this paper we study gossip-based information spreading with bounded message sizes. We use algebraic gossip to disseminate $k$ distinct messages to all $n$ nodes in a network. For arbitrary networks we provide a new upper bound for uniform algebraic gossip of $O((k+\log n + D)\varDelta)$ rounds with high probability, where $D$ and $\varDelta$ are the diameter and the maximum degree in the network, respectively. For many topologies and selections of $k$ this bound improves previous results; in particular, for graphs with a constant maximum degree it implies that uniform gossip is order optimal and the stopping time is $\varTheta(k + D)$. To eliminate the factor of $\varDelta$ from the upper bound we propose a non-uniform gossip protocol, TAG, which is based on algebraic gossip and an arbitrary spanning tree protocol $\mathcal{S}$. The stopping time of TAG is $O(k+\log n + d(\mathcal{S}) + t(\mathcal{S}))$, where $t(\mathcal{S})$ is the stopping time of the spanning tree protocol, and $d(\mathcal{S})$ is the diameter of the spanning tree. We provide two general cases in which this bound leads to an order optimal protocol. The first is for $k=\varOmega(n)$, where, using a simple gossip broadcast protocol that creates a spanning tree in at most linear time, we show that TAG finishes after $\varTheta(n)$ rounds for any graph. The second uses a sophisticated, recent gossip protocol to build a fast spanning tree on graphs with large weak conductance. In turn, this leads to the optimality of TAG on these graphs for $k=\varOmega(\mathrm{polylog}(n))$. The technique used in our proofs relies on queuing theory, which is an interesting approach that can be useful in future gossip analysis.

19.
We consider the problem of optimal real-time scheduling of periodic and sporadic tasks on identical multiprocessors. A number of recent papers have used the notions of fluid scheduling and deadline partitioning to guarantee optimality and improve performance. This article develops a unifying theory with the DP-Fair scheduling policy and examines how it overcomes problems faced by greedy scheduling algorithms. In addition, we present DP-Wrap, a simple DP-Fair scheduling algorithm which serves as a least common ancestor to other recent algorithms. The DP-Fair scheduling policy is extended to address the problem of scheduling sporadic task sets with arbitrary deadlines.
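The deadline-partitioning idea behind DP-Wrap can be sketched compactly: time is sliced at every system deadline, each task receives its density times the slice length inside every slice, and the per-slice allocations are packed onto the processors with McNaughton's wrap-around rule. The code below is a hedged illustration of that allocation step for implicit-deadline tasks (utilization used as density, hypothetical inputs); it is not the article's full algorithm or its sporadic and arbitrary-deadline extensions.

```python
# DP-Wrap-style allocation inside one slice [t, t + L): each task i receives
# u_i * L units of execution, laid end to end on an unrolled timeline of length m*L
# and "wrapped" onto the m processors (McNaughton's rule). A task cut by a wrap
# boundary migrates once within the slice.

def dp_wrap_slice(utilizations, m, L):
    """utilizations: per-task utilizations with sum <= m.
    Returns, per processor, a list of (task, start_offset, end_offset) within the slice."""
    assert sum(utilizations) <= m + 1e-9
    schedule = [[] for _ in range(m)]
    pos = 0.0                                # position on the unrolled timeline
    for task, u in enumerate(utilizations):
        remaining = u * L
        while remaining > 1e-12:
            proc, offset = int(pos // L), pos % L
            chunk = min(remaining, L - offset)
            schedule[proc].append((task, offset, offset + chunk))
            pos += chunk
            remaining -= chunk
    return schedule

# Example with a hypothetical task set: utilizations summing to 2 on m = 2 CPUs, slice length 5.
if __name__ == "__main__":
    for p, segs in enumerate(dp_wrap_slice([0.6, 0.5, 0.5, 0.4], 2, 5.0)):
        print("CPU", p, segs)
```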

20.
This paper concerns project scheduling under uncertainty. The project is modeled as an activity-on-node network, each activity having an uncertain duration represented by an interval. The problem of computing the minimum float of each activity over all duration scenarios is addressed. For solving this NP-hard problem, Dubois et al. (in J. Intell. Manuf. 16, 407–422, 2005) and Fortin et al. (in J. Sched. doi:10.1007/s10951-010-0163-3, 2010) have proposed an algorithm based on path enumeration. In this paper, new structural properties of optimal solutions are established and used for deriving a lower bound and designing an efficient branch-and-bound procedure. Two mixed-integer programming formulations are also proposed. Computational experiments have been conducted on a large variety of randomly generated problem instances. The results show that the proposed branch-and-bound procedure is very fast and consistently outperforms the MIP formulations and the path enumeration algorithm.
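For a fixed duration vector the float in question is the classical CPM quantity, and the interval problem asks for its minimum over the duration box. A compact statement in standard CPM notation (added here only for orientation, not taken from the paper) is:

$$
\begin{aligned}
ES_j &= \max_{i \in \mathrm{Pred}(j)} \bigl(ES_i + d_i\bigr), & ES_{\mathrm{start}} &= 0,\\
LS_i &= \min_{j \in \mathrm{Succ}(i)} LS_j \;-\; d_i, & LS_{\mathrm{end}} &= ES_{\mathrm{end}},\\
f_i(d) &= LS_i - ES_i \ \text{(total float of activity } i\text{)}, & 
f_i^{\min} &= \min\bigl\{\, f_i(d) : d_k \in [\underline{d}_k, \overline{d}_k] \ \text{for all } k \,\bigr\}.
\end{aligned}
$$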

