Found 20 similar documents (search time: 938 ms)
1.
Speed Scaling of Tasks with Precedence Constraints (total citations: 1; self: 0; others: 1)
We consider the problem of speed scaling to conserve energy in a multiprocessor setting where there are precedence constraints between tasks, and where the performance measure is the makespan. That is, we consider an energy-bounded version of the classic problem Pm|prec|C_max. We extend the standard three-field notation and denote this problem as Sm|prec, energy|C_max. We show that, without loss of generality, one need only consider constant power schedules. We then show how to reduce this problem to the problem Qm|prec|C_max to obtain a poly-log(m)-approximation algorithm.
A preliminary version of this paper appears in Proceedings of the 3rd Workshop on Approximation and Online Algorithms (WAOA
2005).
Research of K. Pruhs supported in part by NSF grants CCR-0098752, ANI-0123705, CNS-0325353, CCF-0448196, CCF-0514058, and
IIS-0534531.
Research of R. van Stee supported by Alexander von Humboldt-Stiftung.
2.
We consider a single-source network design problem from a game-theoretic perspective. Gupta, Kumar and Roughgarden (Proc.
35th Annual ACM STOC, pp. 365–372, 2003) developed a simple method for a single-source rent-or-buy problem that also yields the best-known approximation ratio for
the problem. We show how to use a variant of this method to develop an approximately budget-balanced and group strategyproof
cost-sharing method for the problem.
The novelty of our approach stems from our obtaining the cost-sharing methods for the rent-or-buy problem by carefully combining
cost-shares for the simpler Steiner tree problem. Our algorithm is conceptually simpler than the previous such cost-sharing
method due to Pál and Tardos (Proc. 44th Annual FOCS, pp. 584–593, 2003), and improves the previously known approximation factor from 15 to 4.6.
A preliminary version of this work appears in the Proc. International Workshop on Approximation Algorithms for Combinatorial Optimization Problems, 2004. This research was done in part during the IMA Workshop on Network Management and Design at the University of Minnesota, April 2003.
A. Gupta supported in part by an NSF CAREER award CCF-0448095, and by an Alfred P. Sloan Fellowship.
A. Srinivasan supported in part by the National Science Foundation under Grant No. 0208005 and ITR Award CNS-0426683.
Research of É. Tardos supported in part by ONR grant N00014-98-1-0589, and NSF grants CCR-0311333 and CCR-0325453.
3.
Yijie Han, Algorithmica, 2008, 51(4):428-434
We present an O(n^3(log log n/log n)^(5/4)) time algorithm for all pairs shortest paths. This algorithm improves on the best previous result of O(n^3/log n) time.
Research supported in part by NSF grant 0310245.
4.
We consider a model of game-theoretic network design initially studied by Anshelevich et al. (Proceedings of the 45th Annual
Symposium on Foundations of Computer Science (FOCS), pp. 295–304, 2004), where selfish players select paths in a network to minimize their cost, which is prescribed by Shapley cost shares. If
all players are identical, the cost share incurred by a player for an edge in its path is the fixed cost of the edge divided
by the number of players using it. In this special case, Anshelevich et al. (Proceedings of the 45th Annual Symposium on Foundations
of Computer Science (FOCS), pp. 295–304, 2004) proved that pure-strategy Nash equilibria always exist and that the price of stability—the ratio between the cost of the
best Nash equilibrium and that of an optimal solution—is Θ(log k), where k is the number of players. Little was known about the existence of equilibria or the price of stability in the general weighted version of the game. Here, each player i has a weight w_i ≥ 1, and its cost share of an edge in its path equals w_i times the edge cost, divided by the total weight of the players using the edge.
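The weighted Shapley cost-share rule just defined can be computed directly. A minimal sketch (the data layout and function name are our own, not from the paper):

```python
# Weighted Shapley cost shares: player i pays, for each edge e on its
# path, w_i * cost(e) / (total weight of players using e).

def cost_shares(edge_costs, paths, weights):
    """edge_costs: {edge: cost}; paths: {player: set of edges};
    weights: {player: w_i >= 1}. Returns {player: total cost share}."""
    # Total weight of the players using each edge.
    load = {e: 0.0 for e in edge_costs}
    for i, path in paths.items():
        for e in path:
            load[e] += weights[i]
    return {i: sum(weights[i] * edge_costs[e] / load[e] for e in path)
            for i, path in paths.items()}

# Two unit-weight players share edge "a": its fixed cost 6 splits evenly.
demo = cost_shares({"a": 6.0, "b": 4.0},
                   {1: {"a", "b"}, 2: {"a"}},
                   {1: 1.0, 2: 1.0})
print(demo)  # {1: 7.0, 2: 3.0}
```

Note that the shares are exactly budget-balanced: they sum to the total cost of the used edges.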
This paper presents the first general results on weighted Shapley network design games. First, we give a simple example with no pure-strategy Nash equilibrium. This motivates considering the price of stability with respect to α-approximate Nash equilibria—outcomes from which no player can decrease its cost by more than an α multiplicative factor. Our first positive result is that O(log w_max)-approximate Nash equilibria exist in all weighted Shapley network design games, where w_max is the maximum player weight. More generally, we establish the following trade-off between the two objectives of good stability and low cost: for every α=Ω(log w_max), the price of stability with respect to O(α)-approximate Nash equilibria is O((log W)/α), where W is the sum of the players’ weights. In particular, there is always an O(log W)-approximate Nash equilibrium with cost within a constant factor of optimal.
Finally, we show that this trade-off curve is nearly optimal: we construct a family of networks without o(log w_max/log log w_max)-approximate Nash equilibria, and show that for all α=Ω(log w_max/log log w_max), achieving a price of stability of O(log W/α) requires relaxing equilibrium constraints by an Ω(α) factor.
Research of H.-L. Chen supported in part by NSF Award 0323766.
Research of T. Roughgarden supported in part by ONR grant N00014-04-1-0725, DARPA grant W911NF-04-9-0001, and an NSF CAREER
Award.
5.
We study dynamic routing in store-and-forward packet networks where each network link has bounded buffer capacity for receiving
incoming packets and is capable of transmitting a fixed number of packets per unit of time. At any moment in time, packets
are injected at various network nodes with each packet specifying its destination node. The goal is to maximize the throughput, defined as the number of packets delivered to their destinations.
In this paper, we make some progress on throughput maximization in various network topologies. Let n and m denote the number of nodes and links in the network, respectively. For line networks, we show that Nearest-to-Go (NTG), a natural distributed greedy algorithm, is …-competitive, essentially matching a known … lower bound on the performance of any greedy algorithm. We also show that if we allow the online routing algorithm to make centralized decisions, there is a randomized polylog(n)-competitive algorithm for line networks as well as for rooted tree networks, where each packet is destined for the root of the tree. For grid graphs, we show that NTG has a competitive ratio of …, while no greedy algorithm can achieve a ratio better than …. Finally, for arbitrary network topologies, we show that NTG is …-competitive, improving upon an earlier bound of O(mn).
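The NTG rule itself is not spelled out above. Assuming the usual reading of "Nearest-to-Go" (give priority to the packet currently closest to its destination), one forwarding decision at a node of a line network can be sketched as follows (function name and tie-breaking are our own):

```python
def ntg_select(packets, node):
    """Pick the packet to transmit next under Nearest-to-Go: the one
    whose destination is nearest to the current node on the line.
    packets: list of (packet_id, destination); node: current position.
    Ties broken by packet id. Returns a packet_id, or None if idle."""
    if not packets:
        return None
    return min(packets, key=lambda p: (abs(p[1] - node), p[0]))[0]

# At node 3, packet "b" (destination 4) is nearer than "a" (destination 9).
print(ntg_select([("a", 9), ("b", 4)], 3))  # b
```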
An extended abstract appeared in the Proceedings of the 8th Workshop on Approximation Algorithms for Combinatorial Optimization Problems, APPROX 2005, Berkeley, CA, USA, pp. 1–13, Lecture Notes in Computer Science, vol. 1741, Springer, Berlin.
S. Angelov is supported in part by NSF Career Award CCR-0093117, NSF Award ITR 0205456 and NIGMS Award 1-P20-GM-6912-1.
S. Khanna is supported in part by an NSF Career Award CCR-0093117, NSF Award CCF-0429836, and a US-Israel Binational Science
Foundation Grant.
K. Kunal is supported in part by an NSF Career Award CCR-0093117 and NSF Award CCF-0429836.
6.
We analyze the List scheduling algorithm for the problem of minimizing makespan using a worst-average-case (wac) analysis technique, previously used by Kenyon for analyzing the Best Fit bin packing algorithm. We show that List’s wac ratio asymptotically approaches 2 as m→∞. Thus, List’s worst-case behavior is not overly dependent on the order of job arrivals.
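List scheduling, the algorithm under analysis, is the classic greedy rule: place each job, in arrival order, on the machine that frees up first. A minimal sketch showing its order dependence:

```python
import heapq

def list_schedule(jobs, m):
    """Greedy List scheduling: assign each job, in the given order, to
    the machine that becomes free earliest. Returns the makespan."""
    machines = [0] * m  # current finishing time of each machine
    heapq.heapify(machines)
    for p in jobs:
        t = heapq.heappop(machines)
        heapq.heappush(machines, t + p)
    return max(machines)

# Same five jobs on two machines, two arrival orders, two makespans.
print(list_schedule([3, 3, 2, 2, 2], 2))  # 7
print(list_schedule([2, 3, 2, 3, 2], 2))  # 6
```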
C.J. Osborn is supported in part by NSF grant CCR 0105283. This work was done while the author was an undergraduate student
at Michigan State University.
E. Torng is supported in part by NSF grant CCR 0105283.
7.
This paper presents scheduling algorithms for procrastinators, where the speed that a procrastinator executes a job increases
as the due date approaches. We give optimal off-line scheduling policies for linearly increasing speed functions. We then
explain the computational/numerical issues involved in implementing this policy. We next explore the online setting, showing
that there exist adversaries that force any online scheduling policy to miss due dates. This impossibility result motivates
the problem of minimizing the maximum interval stretch of any job; the interval stretch of a job is its flow time divided by its scheduling window (due date minus release time). We show
that several common scheduling strategies, including the “hit-the-highest-nail” strategy beloved by procrastinators, have
arbitrarily large maximum interval stretch. Then we give the “thrashing” scheduling policy and show that it is a Θ(1) approximation algorithm for the maximum interval stretch.
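The interval stretch defined above is a one-line computation; a small sketch (function name is our own):

```python
def interval_stretch(release, due, completion):
    """Interval stretch of a job: flow time (completion - release)
    divided by the scheduling window (due - release)."""
    return (completion - release) / (due - release)

# A job released at t=0 with due date 10 that finishes at t=15 has
# stretch 1.5: it overran its window by 50%.
print(interval_stretch(0, 10, 15))  # 1.5
```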
Research of M.A. Bender was supported in part by NSF Grants CCR-0208670, CCF-0621439/0621425, CCF-0540897/05414009, CCF-0634793/0632838,
and CNS-0627645.
8.
The data migration problem is to compute an efficient plan for moving data stored on devices in a network from one configuration to another.
It is modeled by a transfer graph, where vertices represent the storage devices, and edges represent data transfers required
between pairs of devices. Each vertex has a non-negative weight, and each edge has a processing time. A vertex completes when
all the edges incident on it complete; the constraint is that two edges incident on the same vertex cannot be processed simultaneously.
The objective is to minimize the sum of weighted completion times of all vertices. Kim (J. Algorithms 55, 42–57, 2005) gave an LP-rounding 3-approximation algorithm when edges have unit processing times. We give a more efficient primal-dual
algorithm that achieves the same approximation guarantee. When edges have arbitrary processing times we give a primal-dual
5.83-approximation algorithm. We also study a variant of the open shop scheduling problem. This is a special case of the data migration problem in which the transfer graph is bipartite and the objective is to minimize the sum of completion times of edges. We present a simple algorithm that achieves an approximation ratio of …, thus improving on the 1.796-approximation given by Gandhi et al. (ACM Trans. Algorithms 2(1), 116–129, 2006). We show that the analysis of our algorithm is almost tight.
A preliminary version of the paper appeared in the Proceedings of the 9th International Workshop on Approximation Algorithms
for Combinatorial Optimization Problems, APPROX 2006.
Research of R. Gandhi partially supported by Rutgers University Research Council Grant.
Research of J. Mestre done at the University of Maryland; supported by NSF Awards CCR-0113192 and CCF-0430650, and the University
of Maryland Dean’s Dissertation Fellowship.
9.
Julián Mestre, Algorithmica, 2009, 55(1):227-239
We study the partial vertex cover problem. Given a graph G=(V,E), a weight function w:V→R+, and an integer s, our goal is to cover all but s edges by picking a set of vertices with minimum weight. The problem is clearly NP-hard, as it generalizes the well-known vertex cover problem. We provide a primal-dual 2-approximation algorithm which runs in O(n log n+m) time. This represents an improvement in running time over the previously known fastest algorithm.
Our technique can also be used to get a 2-approximation for a more general version of the problem. In the partial capacitated vertex cover problem each vertex u comes with a capacity k_u. A solution consists of a function x:V→ℕ0 and an orientation of all but s edges, such that the number of edges oriented toward vertex u is at most x_u k_u. Our objective is to find a cover that minimizes ∑_{v∈V} x_v w_v. This is the first 2-approximation for the problem; it also runs in O(n log n+m) time.
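The primal-dual algorithm itself is beyond a snippet, but the objective being approximated can be pinned down with a small feasibility-and-weight checker (a sketch, not the paper's algorithm; names are our own):

```python
def partial_cover_weight(edges, cover, weights, s):
    """Check that vertex set `cover` leaves at most s edges uncovered;
    return its total weight, or None if infeasible.
    edges: list of (u, v); weights: {vertex: w(v) >= 0}."""
    uncovered = sum(1 for (u, v) in edges
                    if u not in cover and v not in cover)
    if uncovered > s:
        return None
    return sum(weights[v] for v in cover)

edges = [(1, 2), (2, 3), (3, 4), (4, 1)]
w = {1: 5, 2: 1, 3: 1, 4: 5}
# Picking {2, 3} covers three of the four edges; with s=1 it is feasible.
print(partial_cover_weight(edges, {2, 3}, w, 1))  # 2
```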
Research supported by NSF Awards CCR 0113192 and CCF 0430650, and the University of Maryland Dean’s Dissertation Fellowship.
10.
In this paper, the single-machine scheduling problem 1|prec|f_max is considered. It is one of the most general scheduling problems for which an efficient polynomial algorithm has been developed: one optimal solution (a sequence of jobs) can always be calculated quickly. However, the set of all optimal solutions may contain many other sequences, so it is important to give a full characterization of that set. This paper consists of two parts. In the first part, some sufficient and necessary conditions for the optimality of a given solution to the problem 1|prec|f_max are proved. In the second part, an application of these conditions to sensitivity analysis is presented.
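The efficient polynomial algorithm for 1|prec|f_max is commonly credited to Lawler: build the sequence from back to front, each time scheduling last a job with no unscheduled successors that minimizes its cost at the current total processing time. A sketch (the data layout is our own assumption):

```python
def lawler(proc, succ, f):
    """Lawler's rule for 1|prec|f_max (a sketch). proc: {job: p_j};
    succ: {job: set of successors}; f: {job: cost function of the
    job's completion time}. Returns an optimal job sequence."""
    jobs = set(proc)
    t = sum(proc.values())       # completion time of the last slot
    seq = []
    while jobs:
        # Candidates: jobs whose successors are all already scheduled.
        avail = [j for j in jobs if not (succ.get(j, set()) & jobs)]
        j = min(avail, key=lambda j: f[j](t))
        seq.append(j)
        jobs.remove(j)
        t -= proc[j]
    return seq[::-1]

# Two independent jobs with lateness costs (due dates 4 and 10):
# job "b" is cheapest to finish last at t=5, so "a" goes first.
proc = {"a": 2, "b": 3}
costs = {"a": lambda t: t - 4, "b": lambda t: t - 10}
print(lawler(proc, {}, costs))  # ['a', 'b']
```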
11.
Patchrawat “Patch” Uthaisombut, Algorithmica, 2008, 50(3):312-328
It is well known that the Earliest-Deadline-First (EDF) and the Least-Laxity-First (LLF) algorithms are optimal for the problem of preemptively scheduling jobs that arrive over time on a single machine to minimize the maximum lateness (1|r_j, pmtn|L_max). It was not previously known what other online algorithms are optimal for this problem. As this problem is fundamental in machine scheduling, it deserves a thorough investigation. In this paper, the concept of compound laxity is introduced, and a complete characterization of all optimal online algorithms for this problem is derived.
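The EDF and LLF priority rules mentioned above are easy to state concretely. A minimal sketch of each decision rule (the job encoding is our own):

```python
# Jobs are (id, deadline, remaining_work) tuples that have already
# been released. Re-evaluating the pick at every arrival/completion
# yields the preemptive schedule.

def edf_pick(pending):
    """Earliest-Deadline-First: run the job with the earliest deadline."""
    return min(pending, key=lambda j: j[1])[0] if pending else None

def llf_pick(pending, now):
    """Least-Laxity-First: run the job with the least laxity, i.e.
    deadline - now - remaining work."""
    return min(pending, key=lambda j: j[1] - now - j[2])[0] if pending else None

jobs = [("a", 12, 4), ("b", 7, 2)]
print(edf_pick(jobs))          # b  (deadline 7 < 12)
print(llf_pick(jobs, now=0))   # b  (laxity 5 < 8)
```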
12.
On approximating the longest path in a graph (total citations: 6; self: 0; others: 6)
We consider the problem of approximating the longest path in undirected graphs. In an attempt to pin down the best achievable
performance ratio of an approximation algorithm for this problem, we present both positive and negative results. First, a
simple greedy algorithm is shown to find long paths in dense graphs. We then consider the problem of finding paths in graphs
that are guaranteed to have extremely long paths. We devise an algorithm that finds paths of a logarithmic length in Hamiltonian
graphs. This algorithm works for a much larger class of graphs (weakly Hamiltonian), where the result is the best possible.
Since the hard case appears to be that of sparse graphs, we also consider sparse random graphs. Here we show that a relatively long path can be obtained, thereby partially answering an open problem of Broder et al.
To explain the difficulty of obtaining better approximations, we also prove hardness results. We show that, for any ε<1, the problem of finding a path of length n−n^ε in an n-vertex Hamiltonian graph is NP-hard. We then show that no polynomial-time algorithm can find a constant-factor approximation to the longest-path problem unless P=NP. We conjecture that the result can be strengthened to say that, for some constant δ>0, finding an approximation of ratio n^δ is also NP-hard. As evidence toward this conjecture, we show that if any polynomial-time algorithm can approximate the longest path to a ratio of …, for any ε>0, then NP has a quasi-polynomial deterministic time simulation. The hardness results apply even to the special case where the input consists of bounded-degree graphs.
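The greedy algorithm for dense graphs is not described in the abstract. As a generic illustration of greedy path-finding (our own sketch, not the authors' algorithm), one might keep extending a path toward the unvisited neighbor with the most remaining options:

```python
def greedy_path(adj, start):
    """Greedily extend a simple path from `start`, always moving to an
    unvisited neighbor (highest remaining degree first, a common
    heuristic). adj: {vertex: set of neighbors}. Returns the path."""
    path, seen = [start], {start}
    while True:
        nxt = [v for v in adj[path[-1]] if v not in seen]
        if not nxt:
            return path
        v = max(nxt, key=lambda u: len(adj[u] - seen))
        path.append(v)
        seen.add(v)

# On a 4-cycle the greedy walk visits all four vertices.
adj = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {3, 1}}
print(greedy_path(adj, 1))
```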
D. Karger was supported by an NSF Graduate Fellowship, NSF Grant CCR-9010517, and grants from the Mitsubishi Corporation and
OTL. R. Motwani was supported by an Alfred P. Sloan Research Fellowship, an IBM Faculty Development Award, grants from Mitsubishi
and OTL, NSF Grant CCR-9010517, and NSF Young Investigator Award CCR-9357849, with matching funds from IBM, the Schlumberger
Foundation, the Shell Foundation, and the Xerox Corporation. G. D. S. Ramkumar was supported by a grant from the Toshiba Corporation.
Communicated by M. X. Goemans.
13.
Stanley P. Y. Fung, Feifeng Zheng, Wun-Tat Chan, Francis Y. L. Chin, Chung Keung Poon, Prudence W. H. Wong, Journal of Scheduling, 2008, 11(4):299-308
We study an on-line broadcast scheduling problem in which requests have deadlines, and the objective is to maximize the weighted
throughput, i.e., the weighted total length of the satisfied requests. For the case where all requested pages have the same
length, we present an online deterministic algorithm named BAR and prove that it is 4.56-competitive. This improves on the previous algorithm of Kim and Chwa (Theor. Comput. Sci. 325(3):479–488, 2004), which was shown to be 5-competitive by Chan et al. (Lecture Notes in Computer Science, vol. 3106, pp. 210–218, 2004). In the case where pages may have different lengths, we give a (…)-competitive algorithm, where Δ is the ratio of maximum to minimum page lengths. This improves on the (4Δ+3)-competitive algorithm of Chan et al. (Lecture Notes in Computer Science, vol. 3106, pp. 210–218, 2004). We also prove an almost matching lower bound of Ω(Δ/log Δ). Furthermore, for small values of Δ we give better lower bounds.
The work described in this paper was fully supported by grants from the Research Grants Council of the Hong Kong SAR, China
[CityU 1198/03E, HKU 7142/03E, HKU 5172/03E], an NSF Grant of China [No. 10371094], and a Nuffield Foundation Grant of UK
[NAL/01004/G].
14.
In this paper we study the external memory planar point enclosure problem: Given N axis-parallel rectangles in the plane, construct a data structure on disk (an index) such that all K rectangles containing a query point can be reported I/O-efficiently. This problem has important applications in e.g. spatial
and temporal databases, and is dual to the important and well-studied orthogonal range searching problem. Surprisingly, despite
the fact that the problem can be solved optimally in internal memory with linear space and O(log N+K) query time, we show that one cannot construct a linear-sized external memory point enclosure data structure that can be used to answer a query in O(log_B N+K/B) I/Os, where B is the disk block size. To obtain this bound, Ω(N/B^(1−ε)) disk blocks are needed for some constant ε>0. With linear space, the best obtainable query bound is O(log_2 N+K/B) if a linear output term O(K/B) is desired. To show this we prove a general lower bound on the tradeoff between the size of the data structure and its query cost. We also develop a family of structures with matching space and query bounds.
An extended abstract of this paper appeared in Proceedings of the 12th European Symposium on Algorithms (ESA’04), Bergen, Norway, September 2004, pp. 40–52.
L. Arge’s research was supported in part by the National Science Foundation through RI grant EIA–9972879, CAREER grant CCR–9984099,
ITR grant EIA–0112849, and U.S.-Germany Cooperative Research Program grant INT–0129182, as well as by the US Army Research
Office through grant W911NF-04-01-0278, by an Ole Roemer Scholarship from the Danish National Science Research Council, a
NABIIT grant from the Danish Strategic Research Council and by the Danish National Research Foundation.
V. Samoladas’ research was supported in part by a grant co-funded by the European Social Fund and National Resources-EPEAEK
II-PYTHAGORAS.
K. Yi’s research was supported in part by the National Science Foundation through ITR grant EIA–0112849, U.S.-Germany Cooperative
Research Program grant INT–0129182, and Hong Kong Direct Allocation Grant (DAG07/08).
15.
We present two new algorithms, Arc Length and Peer Count, for choosing a peer uniformly at random from the set of all peers in Chord (Proceedings of the ACM SIGCOMM 2001 Technical Conference, 2001). We show analytically that, in expectation, both algorithms have latency O(log n) and send O(log n) messages. Moreover, we show empirically that the average latency and message cost of Arc Length is 10.01 log n and that the average latency and message cost of Peer Count is 20.02 log n. To the best of our knowledge, these two algorithms are the first fully distributed algorithms for choosing a peer uniformly
at random from the set of all peers in a Distributed Hash Table (DHT). Our motivation for studying this problem is threefold:
to enable data collection by statistically rigorous sampling methods; to provide support for randomized, distributed algorithms
over peer-to-peer networks; and to support the creation and maintenance of random links, and thereby offer a simple means
of improving fault-tolerance.
Research of S. Lewis, J. Saia and M. Young was partially supported by NSF grant CCR-0313160 and Sandia University Research
Program grant No. 191445.
16.
We show efficient algorithms for edge-coloring planar graphs. Our main result is a linear-time algorithm for coloring planar graphs with maximum degree Δ with max{Δ,9} colors. Thus the coloring is optimal for graphs with maximum degree Δ≥9. Moreover, for Δ=4,5,6 we give linear-time algorithms that use Δ+2 colors. These results improve over the algorithms of Chrobak and Yung (J. Algorithms 10:35–51, 1989) and of Chrobak and Nishizeki (J. Algorithms 11:102–116, 1990), which color planar graphs using max{Δ,19} colors in linear time or using max{Δ,9} colors in … time.
R. Cole supported in part by NSF grants CCR0105678 and CCF0515127 and IDM0414763.
Ł. Kowalik supported in part by KBN grant 4T11C04425. Part of this work was done while Ł. Kowalik was staying at the Max Planck
Institute in Saarbruecken, Germany.
17.
Suffix trees and suffix arrays are fundamental full-text index data structures to solve problems occurring in string processing.
Since suffix trees and suffix arrays have different capabilities, some problems are solved more efficiently using suffix trees
and others are solved more efficiently using suffix arrays. We consider efficient index data structures with the capabilities
of both suffix trees and suffix arrays without requiring much space. When the size of an alphabet is small, enhanced suffix
arrays are such index data structures. However, when the size of an alphabet is large, enhanced suffix arrays lose the power
of suffix trees. Pattern searching in an enhanced suffix array takes O(m|Σ|) time, while pattern searching in a suffix tree takes O(m log |Σ|) time, where m is the length of the pattern and Σ is the alphabet.
In this paper, we present linearized suffix trees, which are efficient index data structures with the capabilities of both suffix trees and suffix arrays even when the size of the alphabet is large. A linearized suffix tree has all the functionality of the enhanced suffix array and supports pattern search in O(m log |Σ|) time. From a different point of view, it can be considered a practical implementation of the suffix tree, supporting O(m log |Σ|)-time pattern search.
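Neither the linearized suffix tree nor the enhanced suffix array is reproduced here. For orientation only, plain suffix-array pattern search, an O(m log n) binary-search baseline distinct from the O(m|Σ|) and O(m log |Σ|) bounds discussed above, can be sketched:

```python
from bisect import bisect_left, bisect_right

def suffix_array(s):
    """Naive O(n^2 log n) construction; fine for illustration."""
    return sorted(range(len(s)), key=lambda i: s[i:])

def search(s, sa, pat):
    """All occurrences of pat in s via binary search over the sorted
    suffixes (materialized only for clarity; assumes characters
    below '\xff')."""
    suffixes = [s[i:] for i in sa]
    lo = bisect_left(suffixes, pat)
    hi = bisect_right(suffixes, pat + "\xff")
    return sorted(sa[lo:hi])

s = "banana"
print(search(s, suffix_array(s), "ana"))  # [1, 3]
```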
In addition, we also present two efficient algorithms for computing suffix links on the enhanced suffix array and the linearized suffix tree. These are the first such algorithms that run in O(n) time without using range minima queries. Our experimental results show that our algorithms are faster than the previous algorithms.
18.
The discernibility matrix is one of the most important approaches to computing the positive region, reducts, the core, and value reducts in rough sets. The subject of this paper is to develop a parallel counterpart of it, called the "tree expression". Its computational complexity for the positive region and reducts is O(m^2 × n) instead of the O(m × n^2) of the discernibility-matrix-based approach, and is not over O(n^2) for other concepts in rough sets, where m and n are the numbers of attributes and objects, respectively, in a given dataset (also called an "information system" in rough sets). This approach suits information systems with n ≥ m and containing over one million objects.
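As a reference point for the O(m × n^2) cost quoted for the discernibility-matrix approach, the matrix itself is straightforward to compute: for every pair of objects, record the attributes on which they differ. A sketch (the attribute/value encoding is our assumption):

```python
from itertools import combinations

def discernibility_matrix(objects, attrs):
    """Discernibility matrix of an information system: for each pair
    of objects (i, j), the set of attributes on which they differ.
    objects: list of {attr: value} dicts. Cost is O(m * n^2) for n
    objects and m attributes, which the paper's tree expression avoids."""
    return {(i, j): {a for a in attrs if x[a] != y[a]}
            for (i, x), (j, y) in combinations(enumerate(objects), 2)}

data = [{"color": "red", "size": "S"},
        {"color": "red", "size": "L"},
        {"color": "blue", "size": "L"}]
m = discernibility_matrix(data, ["color", "size"])
print(m[(0, 1)])  # {'size'}
```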
19.
Ideal preemptive schedules on two processors (total citations: 2; self: 0; others: 2)
An ideal schedule minimizes both makespan and total flow time. It is known that the Coffman-Graham algorithm (Acta Informatica 1, 200–213, 1972) solves in polynomial time the problem of finding an ideal nonpreemptive schedule of unit-execution-time jobs with equal release dates and arbitrary precedence constraints on two identical parallel processors. This paper presents an extension of the algorithm that solves in polynomial time the preemptive counterpart of this problem. The complexity status of the preemptive problem of minimizing just the total flow time has been open.
Received: 2 May 2003.
Research of J. Sethuraman supported by an NSF CAREER Award DMI-0093981 and an IBM Faculty Partnership Award.
20.
This paper presents a modified Branch and Bound (B&B) algorithm, called the Branch, Bound, and Remember (BB&R) algorithm, which uses the Distributed Best First Search (DBFS) exploration strategy for solving the 1|r_i|∑t_i scheduling problem, a single machine scheduling problem where the objective is to find a schedule with minimum total tardiness. Memory-based dominance strategies are incorporated into the BB&R algorithm. In addition, a modified memory-based dynamic programming algorithm is also introduced to efficiently compute lower bounds for the 1|r_i|∑t_i scheduling problem. Computational results are reported, which show that the BB&R algorithm with the DBFS exploration strategy outperforms the best known algorithms reported in the literature.