Similar Documents
20 similar documents found (search time: 15 ms)
1.
An instance of the path hitting problem consists of two families of paths, 𝒟 and ℋ, in a common undirected graph, where each path in ℋ is associated with a non-negative cost. We refer to 𝒟 and ℋ as the sets of demand and hitting paths, respectively. When p∈ℋ and q∈𝒟 share at least one mutual edge, we say that p hits q. The objective is to find a minimum-cost subset of ℋ whose members collectively hit those of 𝒟. In this paper we provide constant-factor approximation algorithms for path hitting, confined to instances in which the underlying graph is a tree, a spider, or a star. Although such restricted settings may appear to be very simple, we demonstrate that they still capture some of the most basic covering problems in graphs. Our approach combines several novel ideas: we extend the algorithm of Garg, Vazirani and Yannakakis (Algorithmica 18:3–20, 1997) for approximate multicuts and multicommodity flows in trees to prove new integrality properties; we present a reduction that involves multiple calls to this extended algorithm; and we introduce a polynomial-time solvable variant of the edge cover problem, which may be of independent interest. An extended abstract of this paper appeared in Proceedings of the 14th Annual European Symposium on Algorithms, 2006. This work is part of D. Segev's Ph.D. thesis prepared at Tel-Aviv University under the supervision of Prof. Refael Hassin.
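A minimal sketch of the problem definitions only, not the paper's approximation algorithm: paths are assumed to be given as collections of undirected edges (each edge a frozenset of its two endpoints), and the minimum-cost hitting subset is found by exhaustive search, so it is usable on tiny instances only.

```python
from itertools import combinations

def hits(p, q):
    """p hits q iff the two paths share at least one mutual edge."""
    return bool(set(p) & set(q))

def min_cost_hitting(demand, hitting, cost):
    """Minimum-cost subset of `hitting` that collectively hits every demand
    path, by exhaustive search over subsets -- exponential, demo only."""
    best, best_subset = float("inf"), None
    for r in range(len(hitting) + 1):
        for subset in combinations(range(len(hitting)), r):
            if all(any(hits(hitting[h], q) for h in subset) for q in demand):
                c = sum(cost[h] for h in subset)
                if c < best:
                    best, best_subset = c, subset
    return best, best_subset
```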

2.
It is proved that “FIFO” worksharing protocols provide asymptotically optimal solutions to two problems related to sharing large collections of independent tasks in a heterogeneous network of workstations (HNOW). In the first problem, one seeks to accomplish as much work as possible on the network during a prespecified fixed period of L time units. In the second, one seeks to complete W units of work by “renting” the network for as short a time as necessary. The worksharing protocols we study are crafted within an architectural model that characterizes the HNOW via parameters measuring its workstations' computational and communicational powers. All valid protocols are self-scheduling, in the sense that they completely determine both an amount of work to allocate to each of the network's workstations and a schedule for all related interworkstation communications. The schedules provide either a value for W given L, or a value for L given W, and hence solve both of the motivating problems. A protocol observes a FIFO regimen if it has the workstations finish their assigned work, and return their results, in the same order in which they are supplied with their workloads. The proven optimality of FIFO protocols resides in the fact that they accomplish at least as much work as any other protocol during all sufficiently long worksharing episodes, and that they complete sufficiently large given collections of tasks at least as fast as any other protocol. Simulation experiments illustrate that the superiority of FIFO protocols is often observed during worksharing episodes of only a few minutes' duration. A portion of this research was presented at the 15th ACM Symposium on Parallelism in Algorithms and Architectures (2003).

3.
We present an algorithm that takes O(sort(N)) I/Os (sort(N) = Θ((N/(DB)) log_{M/B}(N/B)) is the number of I/Os it takes to sort N data items) to compute a tree decomposition of width at most k, for any graph G of treewidth at most k and size N, where k is a constant. Given such a tree decomposition, we use a dynamic-programming framework to solve a wide variety of problems on G in O(sort(N)) I/Os, including the single-source shortest path problem and a number of problems that are NP-hard on general graphs. The tree decomposition can also be used to obtain an optimal separator decomposition of G, which we use to perform depth-first search in G in O(sort(N)) I/Os. As important tools used in the tree decomposition algorithm, we introduce flippable DAGs and present an algorithm that computes a perfect elimination ordering of a k-tree in O(sort(N)) I/Os. The second contribution of our paper, which is of independent interest, is a general and simple framework for obtaining I/O-efficient algorithms for a number of graph problems that can be solved using greedy algorithms in internal memory. We apply this framework to obtain an improved algorithm for finding a maximal matching and the first deterministic I/O-efficient algorithm for finding a maximal independent set of an arbitrary graph; both algorithms take O(sort(N)) I/Os, and the maximal matching algorithm is used in the tree decomposition algorithm. An abstract of this paper was presented at the 12th Annual ACM-SIAM Symposium on Discrete Algorithms, Proceedings, pp. 89–90, 2001. Research of A. Maheshwari supported by NSERC. Part of this work was done while the second author was a Ph.D. student at the School of Computer Science of Carleton University.

4.
A traveling salesman game is a cooperative game (N, c_D). Here N, the set of players, is the set of cities (or the vertices of the complete graph) and c_D is the characteristic function, where D is the underlying cost matrix. For all S⊆N, define c_D(S) to be the cost of a minimum-cost Hamiltonian tour through the vertices of S∪{0}, where 0 is called the home city. Define Core as the core of a traveling salesman game. Okamoto (Discrete Appl. Math. 138:349–369, 2004) conjectured that for the traveling salesman game with D satisfying the triangle inequality, the problem of testing whether Core is empty or not is NP-hard. We prove that this conjecture is true. This result directly implies NP-hardness for the general case when D is asymmetric. We also study approximately fair cost allocations for these games. For this, we introduce cycle cover games and show that the core of a cycle cover game is non-empty by finding a fair cost allocation vector in polynomial time. For a traveling salesman game, let ε-Core = {x : x(N) ≥ c_D(N) and x(S) ≤ ε·c_D(S) for all S⊆N} be an ε-approximate core, for a given ε>1. By viewing an approximate fair cost allocation vector for this game as a sum of exact fair cost allocation vectors of several related cycle cover games, we provide a polynomial-time algorithm demonstrating the non-emptiness of the log₂(|N|−1)-approximate core by exhibiting a vector in this approximate core for the asymmetric traveling salesman game. We improve this factor further, finding an approximate core with a smaller ratio (depending on a constant c) in polynomial time. We also show that there exists an ε₀>1 such that it is NP-hard to decide whether the ε₀-core is empty or not. A preliminary version of the paper appeared in the third Workshop on Approximation and Online Algorithms (WAOA), 2005.
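A brute-force sketch of the game's definitions, exponential in |N| and assuming SciPy is available: the characteristic function is evaluated by enumerating tours, and core emptiness of a tiny instance is decided by a linear-programming feasibility check (x(N) = c_D(N), x(S) ≤ c_D(S) for every proper coalition S).

```python
from itertools import combinations, permutations
from scipy.optimize import linprog

def tour_cost(D, S):
    """c_D(S): cost of a cheapest Hamiltonian tour through S ∪ {0}."""
    best = float("inf")
    for perm in permutations(sorted(S)):
        route = (0,) + perm + (0,)
        best = min(best, sum(D[route[i]][route[i + 1]]
                             for i in range(len(route) - 1)))
    return best

def core_is_empty(D):
    """Feasibility LP for the exact core of the game on cities 1..n."""
    n = len(D) - 1                      # players 1..n; 0 is the home city
    players = list(range(1, n + 1))
    subsets = [set(c) for r in range(1, n)
               for c in combinations(players, r)]
    A_ub = [[1 if p in S else 0 for p in players] for S in subsets]
    b_ub = [tour_cost(D, S) for S in subsets]
    res = linprog(c=[0] * n, A_ub=A_ub, b_ub=b_ub,
                  A_eq=[[1] * n], b_eq=[tour_cost(D, set(players))],
                  bounds=[(None, None)] * n, method="highs")
    return not res.success
```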

5.
We study an online job scheduling problem arising in networks with aggregated links. The goal is to schedule n jobs, divided into k disjoint chains, on m identical machines, without preemption, so that the jobs within each chain complete in the order of their release times and the maximum flow time is minimized. We present a deterministic online algorithm whose competitive ratio matches the lower bound we show, which holds even for randomized algorithms. The performance bound we derive for the algorithm is, in fact, more subtle than a standard competitive-ratio bound: it shows that in overload conditions (when many jobs are released in a short amount of time), the algorithm's performance is close to the optimum. We also show how to compute an offline solution efficiently for k=1, and that minimizing the maximum flow time is NP-hard for k,m≥2. As by-products of our method, we obtain two offline polynomial-time algorithms for minimizing makespan: an optimal algorithm for k=1, and a 2-approximation algorithm for any k. W. Jawor and M. Chrobak supported by NSF grants OISE-0340752 and CCR-0208856. Work of C. Dürr conducted while affiliated with the Laboratoire de Recherche en Informatique, Université Paris-Sud, 91405 Orsay; supported by the CNRS/NSF grant 17171 and ANR Alpage.
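A small sketch of the objective and constraints rather than of any algorithm from the paper: it validates a candidate schedule (release times respected, no overlap on a machine, chain completions in release order) and computes its maximum flow time. The dictionary encodings of jobs and schedules are assumptions for illustration.

```python
def max_flow_time(jobs, schedule):
    """jobs[j] = (release, proc, chain); schedule[j] = (machine, start).
    Returns the maximum flow time after validating the constraints."""
    by_machine, by_chain = {}, {}
    for j, (machine, start) in schedule.items():
        r, p, c = jobs[j]
        assert start >= r, "job started before its release time"
        by_machine.setdefault(machine, []).append((start, start + p))
        by_chain.setdefault(c, []).append((r, start + p))
    for runs in by_machine.values():        # no two jobs overlap on a machine
        runs.sort()
        assert all(runs[i][1] <= runs[i + 1][0] for i in range(len(runs) - 1))
    for pairs in by_chain.values():         # completions follow release order
        pairs.sort()
        assert all(pairs[i][1] <= pairs[i + 1][1] for i in range(len(pairs) - 1))
    return max(start + jobs[j][1] - jobs[j][0]     # completion minus release
               for j, (_, start) in schedule.items())
```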

6.
A caterpillar is a tree in which all vertices of degree three or more lie on one path, called the backbone. We present a polynomial-time algorithm that produces a linear arrangement of the vertices of a caterpillar with bandwidth at most O(log n/log log n) times the local density of the caterpillar, where the local density is a well-known lower bound on the bandwidth. This result is best possible in the sense that there are caterpillars whose bandwidth is larger than their local density by a factor of Ω(log n/log log n). The previous best approximation ratio for the bandwidth of caterpillars was O(log n). We show that any further improvement in the approximation ratio would require using linear arrangements that do not respect the order of the vertices of the backbone. We also show how to obtain a (1+ε)-approximation for the bandwidth of caterpillars in subexponential time. This result generalizes to trees, planar graphs, and any family of graphs with sufficiently small treewidth.
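A hedged sketch of the lower bound the abstract refers to: the local density is the maximum, over all vertices v and radii r ≥ 1, of ⌈(|B(v,r)| − 1)/(2r)⌉, where B(v,r) is the ball of radius r around v; in any arrangement of bandwidth b, all of B(v,r) must fit within 2rb + 1 consecutive labels. The adjacency-dict encoding is an assumption.

```python
from collections import deque

def local_density(adj):
    """Local density lower bound on bandwidth, via one BFS per vertex."""
    best = 0
    for v in adj:
        dist = {v: 0}
        queue = deque([v])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        ball_sizes = {}                       # radius -> number at distance r
        for d in dist.values():
            ball_sizes[d] = ball_sizes.get(d, 0) + 1
        count = 0
        for r in sorted(ball_sizes):
            count += ball_sizes[r]            # count = |B(v, r)|
            if r >= 1:
                best = max(best, -(-(count - 1) // (2 * r)))  # ceil division
    return best
```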

7.
Given a multivariate polynomial P(X₁,…,X_n) over a finite field 𝔽_q, let N(P) denote the number of roots of P over 𝔽_q. The modular root counting problem is, given a modulus r, to determine N_r(P) = N(P) mod r. We study the complexity of computing N_r(P) when the polynomial is given as a sum of monomials. We give an efficient algorithm to compute N_r(P) when the modulus r is a power of the characteristic of the field, and we show that for all other moduli the problem of computing N_r(P) is NP-hard. We present some hardness results which imply that our algorithm is essentially optimal for prime fields. We also show an equivalence between maximum-likelihood decoding for Reed-Solomon codes and a root-finding problem for symmetric polynomials. P. Gopalan's and R.J. Lipton's research was supported by NSF grant CCR-3606B64. V. Guruswami's research was supported in part by NSF grant CCF-0343672 and a Sloan Research Fellowship.
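A brute-force illustration of the problem statement, not the paper's algorithm: P is given in the assumed sparse form of a list of (coefficient, exponent-vector) monomials over a prime field 𝔽_p, and N_r(P) is computed by exhausting all of 𝔽_p^n, so only tiny p and n are practical.

```python
from itertools import product

def monomial_value(point, exps, p):
    """x_1^{e_1} * ... * x_n^{e_n} evaluated at `point`, mod p (0^0 = 1)."""
    v = 1
    for x, e in zip(point, exps):
        v = (v * pow(x, e, p)) % p
    return v

def count_roots_mod(monomials, p, r):
    """N_r(P) for P = sum of monomials [(coeff, (e_1, ..., e_n)), ...]."""
    n = len(monomials[0][1])
    roots = 0
    for point in product(range(p), repeat=n):
        value = sum(c * monomial_value(point, exps, p)
                    for c, exps in monomials) % p
        roots += (value == 0)
    return roots % r
```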

8.
Radio networks model wireless data communication when the bandwidth is limited to one wave frequency. The key restriction of such networks is the mutual interference of packets arriving simultaneously at a node. The many-to-many (m2m) communication primitive involves p participant nodes from among the n nodes in the network, where the distance between any pair of participants is at most d. The task is to have all the participants get to know all the input messages. We consider three cases of the m2m communication problem. In the ad-hoc case, each participant knows only its name and the values of n, p and d. In the partially centralized case, each participant knows the topology of the network and the values of p and d, but does not know the names of the other participants. In the centralized case, each participant knows the topology of the network and the names of all the participants. For the centralized m2m problem, we give deterministic protocols, for both undirected and directed networks, that work in provably optimal time. For the partially centralized m2m problem, we give a randomized protocol for undirected networks that works with high probability (whp) within a time bound that, as we show, no deterministic protocol can achieve. For the ad-hoc m2m problem, we develop a randomized protocol for undirected networks that works whp, and we show two lower bounds: one on the expected time of any randomized protocol, and another stating that for any deterministic protocol there is a network on which the protocol requires asymptotically more time when n−p(n)=Ω(n) and d>1, and requires Ω(n) time when n−p(n)=o(n). The results of this paper appeared in a preliminary form as “On many-to-many communication in packet radio networks” in Proceedings of the 10th Conference on Principles of Distributed Systems (OPODIS), Bordeaux, France, 2006, Lecture Notes in Computer Science 4305, Springer, Heidelberg, pp. 258–272. The work of B.S. Chlebus was supported by NSF Grant 0310503.

9.
We consider the problem of finding a stable matching of maximum size when both ties and unacceptable partners are allowed in preference lists. This problem is NP-hard, and the previously best known approximation algorithm achieves an approximation ratio of 2−c(log N)/N, where c is an arbitrary positive constant and N is the number of men in an input. In this paper, we improve the ratio to 2−c/√N, where c is an arbitrary constant that satisfies c ≤ 1/(4√6). A preliminary version of this paper was presented at the 16th Annual International Symposium on Algorithms and Computation, ISAAC 2005.
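For context, a sketch of the folklore baseline the paper improves on, not the paper's algorithm: break all ties arbitrarily and run Gale-Shapley. The result is weakly stable for the original instance, and since every weakly stable matching is maximal, its size is within a factor 2 of the maximum. Preference lists are assumed to be dicts of strict lists after tie-breaking; a partner absent from a list is unacceptable.

```python
from collections import deque

def gale_shapley(men_pref, women_pref):
    """Man-proposing Gale-Shapley with unacceptable partners allowed."""
    rank = {w: {m: i for i, m in enumerate(pref)}
            for w, pref in women_pref.items()}
    next_prop = {m: 0 for m in men_pref}
    engaged = {}                       # woman -> man
    free = deque(men_pref)
    while free:
        m = free.popleft()
        if next_prop[m] >= len(men_pref[m]):
            continue                   # list exhausted: m stays single
        w = men_pref[m][next_prop[m]]
        next_prop[m] += 1
        if m not in rank[w]:           # w finds m unacceptable
            free.append(m)
        elif w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])    # w trades up; old partner is free
            engaged[w] = m
        else:
            free.append(m)
    return {m: w for w, m in engaged.items()}
```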

10.
We present a method to speed up the dynamic-programming algorithms used for solving the HMM decoding and training problems for discrete time-independent HMMs. We discuss the application of our method to Viterbi's decoding and training algorithms (IEEE Trans. Inform. Theory IT-13:260–269, 1967), as well as to the forward-backward and Baum-Welch (Inequalities 3:1–8, 1972) algorithms. Our approach is based on identifying repeated substrings in the observed input sequence. Initially, we show how to exploit repetitions of all sufficiently small substrings (this is similar to the Four Russians method). Then, we describe four algorithms based alternatively on run-length encoding (RLE), Lempel-Ziv (LZ78) parsing, grammar-based compression (SLP), and byte-pair encoding (BPE). Compared to Viterbi's algorithm, we achieve speedups of Θ(log n) using the Four Russians method and Ω(r) using BPE, with corresponding speedups for RLE, LZ78, and SLP, where k is the number of hidden states, n is the length of the observed sequence, and r is its compression ratio (under each compression scheme). Our experimental results demonstrate that our new algorithms are indeed faster in practice. We also discuss a parallel implementation of our algorithms. A preliminary version of this paper appeared in Proc. 18th Annual Symposium on Combinatorial Pattern Matching (CPM), pp. 4–15, 2007. Y. Lifshits' research was supported by the Center for the Mathematics of Information and the Lee Center for Advanced Networking. S. Mozes' work was conducted while visiting MIT.
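A minimal sketch of the RLE idea, under assumptions about the encoding: Viterbi over a run of l identical symbols amounts to raising one per-step (max,+) matrix to the l-th power, which repeated squaring does in O(k³ log l) instead of O(k² l). Parameters are assumed to be log-probabilities, and a step is modeled as "transition, then emit"; this illustrates the technique rather than reproducing the paper's algorithms.

```python
import numpy as np

def maxplus_mul(A, B):
    """Matrix product in the (max, +) semiring: C[i,j] = max_k A[i,k]+B[k,j]."""
    return (A[:, :, None] + B[None, :, :]).max(axis=1)

def maxplus_pow(M, length):
    """M^length in (max, +), by repeated squaring: O(k^3 log length)."""
    R = np.full(M.shape, -np.inf)
    np.fill_diagonal(R, 0.0)                 # the (max, +) identity matrix
    while length:
        if length & 1:
            R = maxplus_mul(R, M)
        M = maxplus_mul(M, M)
        length >>= 1
    return R

def viterbi_rle(runs, log_init, log_trans, log_emit):
    """Best log-probability of any state path for an observation given as
    run-length-encoded pairs (symbol, run_length)."""
    v = log_init                             # state distribution before emitting
    for symbol, run_length in runs:
        step = log_trans + log_emit[:, symbol][None, :]  # i -> j, emit at j
        P = maxplus_pow(step, run_length)                # run_length steps at once
        v = (v[:, None] + P).max(axis=0)
    return v.max()
```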

11.
In this paper, we study the merging of two sorted arrays A and B, of sizes n₁ and n₂, on the EREW PRAM with two restrictions: (1) the elements of the two arrays are taken from the integer range [1,n], where n=max(n₁,n₂); (2) the elements are taken either from a uniform distribution, or from a non-uniform distribution satisfying a density condition on each of the p processors' blocks, for 1≤i≤p. We give a new optimal deterministic algorithm for p processors on the EREW PRAM. In favorable parameter ranges the running time of the algorithm is O(log^{(g)} n), which is faster than the previous results, where log^{(g)} n = log log^{(g−1)} n for g>1 and log^{(1)} n = log n. We also extend the domain of the input data to [1,n^k], where k is a constant.
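A sequential sketch of why the bounded integer range matters, not the paper's EREW PRAM algorithm: when all elements come from [1,n], the two arrays can be merged by counting, with no element-to-element comparisons at all, which is the kind of structure that parallel algorithms below the comparison lower bound exploit.

```python
def counting_merge(A, B, n):
    """Merge two sorted arrays of integers from [1, n] in O(len(A)+len(B)+n)
    time by counting occurrences of each value."""
    count = [0] * (n + 1)
    for x in A:
        count[x] += 1
    for x in B:
        count[x] += 1
    out = []
    for v in range(1, n + 1):
        out.extend([v] * count[v])   # emit each value as often as it occurs
    return out
```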

12.
Goal separation is often a fruitful approach when solving complex problems: it provides a way to focus on relevant aspects in a stepwise fashion, and hence to bound the problem-solving scope along a specific direction at any point. This work applies goal separation to the problem of synthesizing robust schedules. The problem is addressed by separating the phase of problem solution, which may pursue a standard optimization criterion (e.g., minimal makespan), from a subsequent phase of solution robustification, in which a more flexible set of solutions is obtained and compactly represented through a temporal graph called a Partial Order Schedule (POS). The key advantage of a POS is that it provides the capability to respond promptly to temporal changes (e.g., activity duration changes or activity start-time delays) and to hedge against further changes (e.g., new activities to perform or unexpected variations in resource capacity). On the one hand, the paper focuses on specific heuristic algorithms for synthesis of POSs, starting from a pre-existing schedule (hence the name Solve-and-Robustify). Different extensions of a technique called chaining, which progressively introduces temporal flexibility into the representation of the solution, are introduced and evaluated. These extensions follow from the fact that in multi-capacitated resource settings more than one POS can be derived from a specific fixed-times solution via chaining, and they carry out a search for the most robust alternative. On the other hand, an additional analysis investigates the performance gain possible by further broadening the search process to consider multiple initial seed solutions. A detailed experimental analysis using state-of-the-art rcpsp/max benchmarks demonstrates the performance advantage of these more sophisticated solve-and-robustify procedures, corroborating prior results obtained on smaller problems and also indicating how this leverage increases as problem size is increased.
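A minimal sketch of one basic chaining pass, under assumptions: a single resource of integer capacity, activities already scheduled feasibly at fixed times, and the output being the precedence constraints that replace the fixed start times. The choice of which free chain receives an activity is exactly the degree of freedom the paper's extensions search over.

```python
def chain(activities, capacity):
    """activities: list of (name, start, end), resource-feasible.
    Returns precedence pairs (pred, succ) forming `capacity` chains."""
    last = [None] * capacity            # last activity placed on each chain
    ends = [0] * capacity               # and its end time
    precedences = []
    for name, start, end in sorted(activities, key=lambda a: a[1]):
        # Pick any chain that is free by `start`; different choices yield
        # different (more or less robust) partial order schedules.
        for i in range(capacity):
            if ends[i] <= start:
                if last[i] is not None:
                    precedences.append((last[i], name))
                last[i], ends[i] = name, end
                break
        else:
            raise ValueError("schedule exceeds resource capacity")
    return precedences
```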

13.
Chvátal-Gomory cuts are among the most well-known classes of cutting planes for general integer linear programs (ILPs). In the case where the constraint multipliers are either 0 or 1/2, such cuts are known as {0,1/2}-cuts. It has been proven by Caprara and Fischetti (Math. Program. 74:221–235, 1996) that separation of {0,1/2}-cuts is NP-hard. In this paper, we study ways to separate {0,1/2}-cuts effectively in practice. We propose a range of preprocessing rules to reduce the size of the separation problem; the core of the preprocessing is a Gaussian-elimination-like procedure. To separate the most violated {0,1/2}-cut, we formulate the (reduced) problem as an integer linear program. Some simple heuristic separation routines complete the algorithmic framework. Computational experiments on benchmark instances show that combining preprocessing with exact and/or heuristic separation is a viable way to generate strong generic cutting planes for integer linear programs and to reduce the overall computation times of state-of-the-art ILP solvers.
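A toy separator illustrating the definition, not the paper's preprocessing-plus-ILP approach: for an integer system Ax ≤ b, every multiplier vector u ∈ {0, 1/2}^m with integral uA yields the valid cut (uA)x ≤ ⌊ub⌋; enumerating all 2^m multipliers (exponential, so demo only) finds the cuts violated by a fractional point x*.

```python
from itertools import product
import numpy as np

def zero_half_cuts(A, b, x_star, tol=1e-9):
    """All {0,1/2}-cuts of the integer system Ax <= b (numpy arrays)
    violated by x_star, by exhaustive multiplier enumeration."""
    m = A.shape[0]
    cuts = []
    for bits in product((0.0, 0.5), repeat=m):
        u = np.array(bits)
        uA, ub = u @ A, float(u @ b)
        if np.all(np.abs(uA - np.round(uA)) < tol):   # left-hand side integral
            rhs = np.floor(ub + tol)                  # rounding strengthens rhs
            if uA @ x_star > rhs + tol:               # cut is violated at x*
                cuts.append((np.round(uA), rhs))
    return cuts
```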

14.
We consider the problem of approximately integrating a Lipschitz function f (with a known Lipschitz constant) over an interval. The goal is to achieve an additive error of at most ε using as few samples of f as possible. We use the adaptive framework: on all problem instances an adaptive algorithm should perform almost as well as the best possible algorithm tuned for the particular problem instance. We distinguish between the performances of the best possible deterministic and randomized algorithms on a given instance. We give a deterministic algorithm whose sample complexity essentially matches the deterministic benchmark and show that an asymptotically better algorithm is impossible; however, any deterministic algorithm requires substantially more samples than the randomized benchmark on some problem instance. By combining a deterministic adaptive algorithm with Monte Carlo sampling with variance reduction, we give an algorithm whose sample complexity we complement with a matching lower bound, in expectation, on some problem instance (f,ε), which proves that our algorithm is optimal.
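A simple deterministic adaptive scheme in the spirit of the problem, not the paper's optimal algorithm: for an L-Lipschitz f, the trapezoid estimate on a piece of width h is off by at most L·h²/4, so a max-heap of per-piece error bounds can be refined greedily until the total certified error drops below ε.

```python
import heapq

def integrate_lipschitz(f, a, b, L, eps):
    """Adaptive integration of an L-Lipschitz f on [a, b], additive error <= eps."""
    def err(x0, x1):
        return L * (x1 - x0) ** 2 / 4       # valid trapezoid bound for Lipschitz f
    fa, fb = f(a), f(b)
    heap = [(-err(a, b), a, b, fa, fb)]     # max-heap via negated bounds
    total_err = err(a, b)
    estimate = (fa + fb) / 2 * (b - a)
    while total_err > eps:
        e, x0, x1, f0, f1 = heapq.heappop(heap)
        xm = (x0 + x1) / 2
        fm = f(xm)                          # one new sample per refinement
        estimate -= (f0 + f1) / 2 * (x1 - x0)
        total_err += e                      # e is negative: removes old bound
        for (u, v, fu, fv) in ((x0, xm, f0, fm), (xm, x1, fm, f1)):
            estimate += (fu + fv) / 2 * (v - u)
            te = err(u, v)
            total_err += te
            heapq.heappush(heap, (-te, u, v, fu, fv))
    return estimate
```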

15.
We propose the e-model for leaf languages, which complements the known balanced and unbalanced concepts. Inspired by the neutral behavior of rejecting paths of NP machines, we allow transducers to output empty words. The paper explains several advantages of the new model. A central aspect is that it allows us to prove strong gap theorems: for any class definable in the e-model, exactly one of two extreme alternatives holds. For the existing models, gap theorems, where they exist at all, only identify gaps for definability by regular languages. We prove gaps for the general case, i.e., for definability by arbitrary languages. We obtain such general gaps for NP, coNP, 1NP, and co1NP. For the regular case we prove further gap theorems for Σ₂P, Π₂P, and Δ₂P; these are the first gap theorems for Δ₂P. This work is related to former work by Bovet, Crescenzi, and Silvestri; Vereshchagin; Hertrampf et al.; Burtschick and Vollmer; and Borchert et al. An extended abstract of this paper was presented at the 31st International Symposium on Mathematical Foundations of Computer Science (MFCS 2006). S. Travers was supported by the Konrad-Adenauer-Stiftung.

16.
We study the communication primitives of broadcasting (one-to-all communication) and gossiping (all-to-all communication) in known-topology radio networks, i.e., where for each primitive the schedule of transmissions is precomputed based on full knowledge of the size and the topology of the network. We show a gossiping schedule for any radio network of size n, diameter D, and maximum degree Δ=Ω(log n) that is almost optimal, in the sense that there exists a radio network topology, specifically a Δ-regular tree, in which radio gossiping cannot be completed any faster. Moreover, we show an improved schedule for the broadcast task. Both our transmission schemes significantly improve upon the previously best known schedules by Gąsieniec, Peleg, and Xin (Proceedings of the 24th Annual ACM SIGACT-SIGOPS PODC, pp. 129–137, 2005), i.e., an O(D+Δ log n)-time schedule for gossiping and a (D+O(log³ n))-time schedule for broadcast. Our broadcasting schedule also improves, for large D, a very recent O(D+log² n)-time broadcasting schedule by Kowalski and Pelc. A preliminary version of this paper appeared in the proceedings of ISAAC'06. F. Cicalese was supported by the Sofja Kovalevskaja Award 2004 of the Alexander von Humboldt Stiftung. F. Manne and Q. Xin were supported by the Research Council of Norway through the SPECTRUM project.

17.
The parameterized node multiway cut problem is, for a given graph, to find a separator of size bounded by k whose removal separates a collection of terminal sets in the graph. In this paper, we develop an O(k·4^k·n³)-time algorithm for this problem, significantly improving the previous best algorithm for the problem. Our result gives the first polynomial-time algorithm for the minimum node multiway cut problem when the separator size is bounded by O(log n). A preliminary version of this paper was presented at the 10th Workshop on Algorithms and Data Structures (WADS 2007). This work was supported in part by the National Science Foundation under Grants CCR-0311590 and CCF-0430683.
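A brute-force illustration of the problem itself, for contrast with the paper's O(k·4^k·n³) FPT algorithm: it tries every vertex subset of size at most k (roughly n^k work) and checks separation by plain graph search. It assumes the common formulation in which terminal vertices themselves may not be deleted.

```python
from itertools import combinations

def separated(adj, removed, terminal_sets):
    """True iff no vertex of one terminal set reaches another set
    once `removed` is deleted (DFS reachability)."""
    def reach(src):
        seen, stack = {src}, [src]
        while stack:
            for v in adj[stack.pop()]:
                if v not in removed and v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen
    for i, S in enumerate(terminal_sets):
        others = set().union(*terminal_sets[i + 1:]) if i + 1 < len(terminal_sets) else set()
        for s in S:
            if s not in removed and reach(s) & others:
                return False
    return True

def node_multiway_cut(adj, terminal_sets, k):
    """Smallest separator of size <= k, by exhaustive search (demo only)."""
    terminals = set().union(*terminal_sets)
    candidates = [v for v in adj if v not in terminals]
    for size in range(k + 1):
        for removed in map(set, combinations(candidates, size)):
            if separated(adj, removed, terminal_sets):
                return removed
    return None
```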

18.
The typechecking problem for transformations of relational data into tree data is the following: given a relational-to-XML transformation P and an XML type d, decide whether, for every database instance, the result of applying P to it satisfies d. TreeQL programs with projection-free conjunctive queries (see Alon et al. in ACM Trans. Comput. Log. 4(3):315–354, 2003) are considered as transformations, and DTDs with arbitrary regular expressions as XML types. A non-elementary upper bound for the typechecking problem was already given by Alon et al. (ACM Trans. Comput. Log. 4(3):315–354, 2003), although in a more general setting where equality and negation in projection-free conjunctive queries and additional universal integrity constraints are allowed. In this paper we show that the typechecking problem is coNEXPTIME-complete. As an intermediate step we consider the following problem, which can be formulated independently of XML notions: given a set of triples of the form (φ,k,j), where φ is a projection-free conjunctive query and k,j are natural numbers, decide whether there exists a database instance such that, for each triple (φ,k,j) in the set, there exists a natural number α such that exactly k+j·α tuples satisfy the query φ in that instance. Our main technical contribution is a NEXPTIME algorithm for this last problem. Partially supported by Polish Ministry of Science and Higher Education research project N206 022 31/3660, 2006/2009. This paper is an extended version of [20], where the coNEXPTIME upper bound was shown.

19.
20.
In this work, we consider the problem of solving a sequence of linear systems Ax^{(k+1)} = b^{(k+1)}, where b^{(k+1)} = f(x^{(k)}). We show that when A is a full matrix and the number of systems to be solved exceeds a threshold that depends on the specific software and hardware setup, it is faster to solve for x^{(k+1)} by explicitly evaluating the inverse matrix A^{−1} rather than through the LU decomposition of A. We also show that the forward error is comparable in both methods, regardless of the condition number of A.
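A sketch of the two strategies being compared, using standard NumPy/SciPy routines; where the break-even point lies is setup-dependent, per the abstract, so the sketch exposes both paths for timing (e.g., with time.perf_counter) rather than asserting a winner. Both precomputations cost O(n³) once; afterwards the explicit-inverse path does one matrix-vector product per step, while the LU path does two triangular solves.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def iterate(A, f, x0, iters, use_inverse):
    """Run x_{k+1} = A^{-1} f(x_k) for `iters` steps, with either a
    precomputed explicit inverse or a cached LU factorization."""
    if use_inverse:
        Ainv = np.linalg.inv(A)           # one-time O(n^3) cost
        step = lambda x: Ainv @ f(x)      # each step: a single GEMV
    else:
        lu_piv = lu_factor(A)             # one-time O(n^3) cost
        step = lambda x: lu_solve(lu_piv, f(x))  # two triangular solves
    x = x0
    for _ in range(iters):
        x = step(x)
    return x
```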
