Similar Documents
1.
Consider the NP-hard problem of finding, given a simple graph G, a series-parallel subgraph of G with the maximum number of edges. The algorithm that, given a connected graph G, outputs a spanning tree of G is a $\frac{1}{2}$-approximation. Indeed, if n is the number of vertices in G, any spanning tree in G has n−1 edges, and any series-parallel graph on n vertices has at most 2n−3 edges. We present a $\frac{7}{12}$-approximation for this problem and results showing the limits of our approach.
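A minimal sketch of this baseline (Python; all identifiers are illustrative, and this is the $\frac{1}{2}$-approximation benchmark, not the paper's $\frac{7}{12}$ algorithm): a spanning tree is itself series-parallel, so collecting n−1 tree edges already guarantees at least half of the at most 2n−3 edges any optimal solution can have.

```python
def spanning_tree_baseline(n, edges):
    """Baseline 1/2-approximation for maximum series-parallel subgraph:
    a spanning tree is series-parallel with n-1 edges, and no
    series-parallel graph on n vertices has more than 2n-3 edges."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    tree = []
    for u, v in edges:  # Kruskal-style scan with union-find
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v))
    return tree  # n-1 edges whenever the input graph is connected
```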

2.
Given a range space $(X,\mathcal{R})$, where $\mathcal{R}\subset 2^{X}$, the hitting set problem is to find a smallest-cardinality subset $H\subseteq X$ that intersects each set in $\mathcal{R}$. We present near-linear-time approximation algorithms for the hitting set problem in the following geometric settings: (i) $\mathcal{R}$ is a set of planar regions with small union complexity; (ii) $\mathcal{R}$ is a set of axis-parallel d-dimensional boxes in $\mathbb{R}^d$. In both cases X is either the entire $\mathbb{R}^d$ or a finite set of points in $\mathbb{R}^d$. The approximation factors yielded by the algorithm are small; they are either the same as, or within very small factors of, the best factors known to be computable in polynomial time.
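For reference, a sketch of the generic greedy baseline over a finite ground set, which achieves an $O(\log|\mathcal{R}|)$ approximation and is what the geometric algorithms above improve on (illustrative code; it assumes every range contains at least one candidate point):

```python
def greedy_hitting_set(points, ranges):
    """Greedy baseline: repeatedly pick the point that hits the most
    still-unhit ranges. Assumes each range intersects `points`."""
    unhit = [set(r) for r in ranges]
    chosen = []
    while any(unhit):
        best = max(points, key=lambda p: sum(p in r for r in unhit))
        chosen.append(best)
        unhit = [r for r in unhit if best not in r]
    return chosen
```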

3.
L. Roditty, Algorithmica 2012, 62(3-4):1073–1087
In this paper we present an algorithm for maintaining a spanner over a dynamic set of points in metric spaces of constant doubling dimension. For a set S of points in $\mathbb{R}^d$, a t-spanner is a sparse graph on the points of S such that between any pair of points there is a path in the spanner whose total length is at most t times the distance between the points. We present the first fully dynamic algorithm for maintaining a spanner whose update time depends solely on the number of points in S. In particular, we show how to maintain a (1+ε)-spanner with $O(n/\varepsilon^d)$ edges, where points can be inserted into S in an amortized update time of $O(\log n)$ and deleted from S in an amortized update time of $\tilde{O}(n^{1/3})$. As a by-product of our techniques we obtain a simple incremental algorithm for constructing a (1+ε)-spanner with $O(n/\varepsilon^d)$ edges in constant doubling dimension metric spaces whose running time is $O(n\log n)$.
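By way of contrast with the dynamic result, here is a sketch of the classical static greedy t-spanner construction (in the style of Althöfer et al.; identifiers are illustrative): pairs are scanned by increasing distance, and an edge is added only when the partial spanner does not already t-approximate the pair.

```python
import heapq
import math

def greedy_spanner(points, t):
    """Static greedy t-spanner: add edge (u, v) only if the current
    spanner has no u-v path of length <= t * d(u, v)."""
    n = len(points)
    d = lambda u, v: math.dist(points[u], points[v])
    pairs = sorted((d(u, v), u, v) for u in range(n) for v in range(u + 1, n))
    adj = [[] for _ in range(n)]
    edges = []
    for duv, u, v in pairs:
        if _truncated_dijkstra(adj, u, v, t * duv) > t * duv:
            adj[u].append((v, duv))
            adj[v].append((u, duv))
            edges.append((u, v))
    return edges

def _truncated_dijkstra(adj, s, goal, cap):
    """Shortest s-goal distance in the partial spanner; abandons any
    path longer than cap and returns inf if none exists within cap."""
    dist = {s: 0.0}
    pq = [(0.0, s)]
    while pq:
        du, u = heapq.heappop(pq)
        if u == goal:
            return du
        if du > dist.get(u, math.inf):
            continue
        for v, w in adj[u]:
            nd = du + w
            if nd <= cap and nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return math.inf
```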

4.
We provide a smoothed analysis of Hoare’s find algorithm, and we revisit the smoothed analysis of quicksort. Hoare’s find algorithm—often called quickselect or one-sided quicksort—is an easy-to-implement algorithm for finding the k-th smallest element of a sequence. While the worst-case number of comparisons that Hoare’s find needs is $\Theta(n^2)$, the average-case number is $\Theta(n)$. We analyze what happens between these two extremes by providing a smoothed analysis. In the first perturbation model, an adversary specifies a sequence of n numbers from [0,1], and then, to each number of the sequence, we add a random number drawn independently from the interval [0,d]. We prove that Hoare’s find needs $\Theta(\frac{n}{d+1}\sqrt{n/d}+n)$ comparisons in expectation if the adversary may also specify the target element (even after seeing the perturbed sequence), and slightly fewer comparisons for finding the median. In the second perturbation model, each element is marked with probability p, and then a random permutation is applied to the marked elements. We prove that the expected number of comparisons to find the median is $\Omega((1-p)\frac{n}{p}\log n)$. Finally, we provide lower bounds on the smoothed number of comparisons of quicksort and Hoare’s find for the median-of-three pivot rule, which usually yields faster algorithms than always selecting the first element: the pivot is the median of the first, middle, and last elements of the sequence. We show that median-of-three does not yield a significant improvement over the classic rule.
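A minimal sketch of Hoare’s find with the median-of-three pivot rule discussed above (0-based rank k; illustrative code, not the paper’s instrumented variant):

```python
def quickselect_median_of_three(a, k):
    """Hoare's find: return the k-th smallest element of a (k is 0-based),
    choosing as pivot the median of the first, middle, and last element."""
    lo, hi = 0, len(a) - 1
    while True:
        if lo == hi:
            return a[lo]
        mid = (lo + hi) // 2
        pivot = sorted((a[lo], a[mid], a[hi]))[1]  # median-of-three rule
        i, j = lo, hi
        while i <= j:  # Hoare partition around the pivot value
            while a[i] < pivot:
                i += 1
            while a[j] > pivot:
                j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i += 1
                j -= 1
        if k <= j:       # answer lies in the left part
            hi = j
        elif k >= i:     # answer lies in the right part
            lo = i
        else:            # k falls in the middle band of pivot copies
            return a[k]
```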

5.
A mode of a multiset S is an element $a\in S$ of maximum multiplicity; that is, a occurs at least as frequently as any other element in S. Given an array A[1:n] of n elements, we consider a basic problem: constructing a static data structure that efficiently answers range mode queries on A. Each query consists of an input pair of indices (i,j) for which a mode of A[i:j] must be returned. The best previous data structure with linear space, by Krizanc, Morin, and Smid (Proceedings of the International Symposium on Algorithms and Computation (ISAAC), pp. 517–526, 2003), requires $\varTheta(\sqrt{n}\log\log n)$ query time in the worst case. We improve their result and present an O(n)-space data structure that supports range mode queries in $O(\sqrt{n/\log n})$ worst-case time. In the external memory model, we give a linear-space data structure that requires $O(\sqrt{n/B})$ I/Os per query, where B denotes the block size. Furthermore, we present strong evidence that a query time significantly below $\sqrt{n}$ cannot be achieved by purely combinatorial techniques: we show that boolean matrix multiplication of two $\sqrt{n}\times\sqrt{n}$ matrices reduces to n range mode queries in an array of size O(n). Additionally, we give linear-space data structures for the dynamic problem (queries and updates in near $O(n^{3/4})$ time), for orthogonal range mode in d dimensions (queries in near $O(n^{1-1/2d})$ time), and for half-space range mode in d dimensions (queries in $O(n^{1-1/d^{2}})$ time). Finally, we complement our dynamic data structure with a reduction from the multiphase problem, again supporting that we cannot hope for much more efficient data structures.
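For intuition, here is a sketch of a classical linear-space design in the spirit of the Krizanc–Morin–Smid structure that this paper improves: split A into $\sqrt{n}$-size blocks, precompute a mode for every block interval, and at query time check, with per-value occurrence lists, only the elements of the two fringe blocks. The query cost is roughly $O(\sqrt{n}\log n)$ here, not the paper's $O(\sqrt{n/\log n})$; all identifiers are illustrative.

```python
import math
from bisect import bisect_left, bisect_right
from collections import defaultdict

class RangeMode:
    def __init__(self, a):
        self.a = a
        n = len(a)
        self.b = max(1, int(math.sqrt(n)))       # block size ~ sqrt(n)
        self.pos = defaultdict(list)             # value -> sorted positions
        for idx, x in enumerate(a):
            self.pos[x].append(idx)
        nb = (n + self.b - 1) // self.b
        # tbl[i][j] = (frequency, value) of a mode over blocks i..j
        self.tbl = [[(0, None)] * nb for _ in range(nb)]
        for i in range(nb):
            cnt, best = defaultdict(int), (0, None)
            for j in range(i, nb):
                for k in range(j * self.b, min((j + 1) * self.b, n)):
                    cnt[a[k]] += 1
                    if cnt[a[k]] > best[0]:
                        best = (cnt[a[k]], a[k])
                self.tbl[i][j] = best

    def _freq(self, x, i, j):
        """Occurrences of x in a[i..j] via binary search."""
        p = self.pos[x]
        return bisect_right(p, j) - bisect_left(p, i)

    def query(self, i, j):
        """Return (mode, frequency) of a[i..j], inclusive."""
        bi, bj = i // self.b, j // self.b
        best_f, best_v = 0, None
        if bj - bi >= 2:                         # fully covered interior
            best_f, best_v = self.tbl[bi + 1][bj - 1]
        # any other candidate mode must occur in a fringe block
        fringe = list(range(i, min((bi + 1) * self.b, j + 1)))
        if bj > bi:
            fringe += range(bj * self.b, j + 1)
        for k in fringe:
            f = self._freq(self.a[k], i, j)
            if f > best_f:
                best_f, best_v = f, self.a[k]
        return best_v, best_f
```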

6.
We study a multi-player one-round game termed the Stackelberg Network Pricing Game, in which a leader can set prices for a subset of m priceable edges in a graph. The other edges have a fixed cost. Based on the leader’s decision, one or more followers optimize a polynomial-time solvable combinatorial minimization problem and choose a minimum-cost solution satisfying their requirements based on the fixed costs and the leader’s prices. The leader receives as revenue the total amount of prices paid by the followers for priceable edges in their solutions. Our model extends several known pricing problems, including single-minded and unit-demand pricing, as well as Stackelberg pricing for certain follower problems such as shortest path or minimum spanning tree. Our first main result is a tight analysis of a single-price algorithm for the single-follower game, which provides a $(1+\varepsilon)\log m$-approximation. This can be extended to provide a $(1+\varepsilon)(\log k+\log m)$-approximation for the general problem with k followers. The problem is also shown to be hard to approximate within $\mathcal{O}(\log^{\varepsilon}k+\log^{\varepsilon}m)$ for some ε>0. If followers have demands, the single-price algorithm provides an $\mathcal{O}(m^{2})$-approximation, and the problem is hard to approximate within $\mathcal{O}(m^{\varepsilon})$ for some ε>0. Our second main result is a polynomial-time algorithm for revenue maximization in the special case of Stackelberg bipartite vertex cover, which is based on non-trivial max-flow and LP-duality techniques. This approach can be extended to provide constant-factor approximations for any constant number of followers.
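A hedged sketch of the single-price idea for a shortest-path follower: offer one candidate price p on every priceable edge, let the follower compute a cheapest s-t path, and collect p per priceable edge it uses. Here the candidate price set is simply supplied by the caller (the paper derives it from the follower's problem), ties are resolved pessimistically for the leader, and all identifiers are illustrative.

```python
import heapq

def follower_shortest_path(n, fixed_edges, priceable_edges, price, s, t):
    """Follower's response: min-cost s-t path when each priceable edge
    costs `price`. Returns (cost, number of priceable edges used); among
    equal-cost paths, the one with fewer priceable edges is preferred
    (pessimistic for the leader)."""
    adj = [[] for _ in range(n)]
    for u, v, c in fixed_edges:
        adj[u].append((v, c, 0))
        adj[v].append((u, c, 0))
    for u, v in priceable_edges:
        adj[u].append((v, price, 1))
        adj[v].append((u, price, 1))
    dist = [(float('inf'), 0)] * n
    dist[s] = (0.0, 0)
    pq = [(0.0, 0, s)]
    while pq:                      # Dijkstra on (cost, #priceable) pairs
        d, cnt, u = heapq.heappop(pq)
        if (d, cnt) > dist[u]:
            continue
        for v, w, pr in adj[u]:
            nd = (d + w, cnt + pr)
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(pq, (nd[0], nd[1], v))
    return dist[t]

def single_price_revenue(n, fixed_edges, priceable_edges, s, t, candidates):
    """Try each candidate price, charge it uniformly, keep the best revenue."""
    best_p, best_rev = None, 0.0
    for p in candidates:
        cost, k = follower_shortest_path(n, fixed_edges, priceable_edges, p, s, t)
        if cost < float('inf') and p * k > best_rev:
            best_p, best_rev = p, p * k
    return best_p, best_rev
```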

7.
The factorization algorithm of Pollard generates a sequence in $\mathbb{Z}_n$ by $$x_0 := 2;\quad x_{i+1} := x_i^2 - 1 \pmod{n},\quad i = 1,2,3,\ldots$$ where n denotes the integer to be factored. The algorithm finds a factor p of n within $O(\sqrt{p})$ macrosteps (= multiplications/divisions in $\mathbb{Z}_n$) on average. An empirical analysis of the Pollard algorithm using the modified sequences $$x_{i+1} = b \cdot x_i^{\alpha} + c \pmod{n},\quad i = 1,2,\ldots$$ with $x_0, b, c, \alpha \in \mathbb{Z}$ and α≥2 shows that a factor p of n satisfying $\gcd(\alpha, p-1)\neq 1$ is now found within $$O\left(\sqrt{\frac{p}{\gcd(\alpha, p-1)}}\right)$$ macrosteps on average.
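A minimal sketch of the classical Pollard iteration with Floyd cycle detection, using the $x^2-1$ recurrence from the abstract (i.e., c = −1); n is assumed composite, and a failed run (returning None) would in practice be retried with a different constant c:

```python
import math

def pollard_rho(n, c=-1):
    """Pollard's rho with Floyd cycle detection on x -> x^2 + c (mod n).
    Returns a non-trivial factor of the composite n, or None on failure."""
    x, y, d = 2, 2, 1
    while d == 1:
        x = (x * x + c) % n      # tortoise: one step
        y = (y * y + c) % n      # hare: two steps
        y = (y * y + c) % n
        d = math.gcd(abs(x - y), n)
    return d if d != n else None

# e.g. pollard_rho(10403) should yield 101 or 103, since 10403 = 101 * 103
```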

8.
In this paper we provide improved approximation algorithms for the Min-Max Tree Cover and Bounded Tree Cover problems. Given a graph G=(V,E) with weights $w:E\to\mathbb{R}^{+}$, a set $T_1,T_2,\ldots,T_k$ of subtrees of G is called a tree cover of G if $V=\bigcup_{i=1}^{k} V(T_{i})$. In the Min-Max k-Tree Cover problem we are given a graph G and a positive integer k, and the goal is to find a tree cover with k trees such that the weight of the largest tree in the cover is minimized. We present a 3-approximation algorithm for this problem, improving on the two different 4-approximation algorithms presented in Arkin et al. (J. Algorithms 59:1–18, 2006) and Even et al. (Oper. Res. Lett. 32(4):309–315, 2004). The problem is known to have an APX-hardness lower bound of $\frac{3}{2}$ (Xu and Wen in Oper. Res. Lett. 38:169–173, 2010). In the Bounded Tree Cover problem we are given a graph G and a bound λ, and the goal is to find a tree cover with the minimum number of trees such that each tree has weight at most λ. We present a 2.5-approximation algorithm for this problem, improving on the 3-approximation bound in Arkin et al. (J. Algorithms 59:1–18, 2006).

9.
Yuichi Yoshida, Hiro Ito, Algorithmica 2012, 62(3-4):701–712
We present an algorithm for testing the k-vertex-connectivity of graphs with a given maximum degree bound. The time complexity of the algorithm is independent of the number of vertices and edges of the graph. For a fixed degree bound d, a graph G with n vertices and maximum degree at most d is called ε-far from k-vertex-connectivity when at least $\frac{\epsilon dn}{2}$ edges must be added to or removed from G to obtain a k-vertex-connected graph with maximum degree at most d. The algorithm always accepts every graph that is k-vertex-connected and rejects every graph that is ε-far from k-vertex-connectivity with probability at least 2/3. The algorithm runs in $O(d(\frac{c}{\epsilon d})^{k}\log\frac{1}{\epsilon d})$ time (c>1 is a constant) for (k−1)-vertex-connected graphs, and in $O(d(\frac{ck}{\epsilon d})^{k}\log\frac{k}{\epsilon d})$ time (c>1 is a constant) for general graphs. It is the first constant-time k-vertex-connectivity testing algorithm for general k≥4.

10.
In this paper we consider both the maximization variant Max Rep and the minimization variant Min Rep of the famous Label Cover problem. So far the best approximation ratio known for each of these two problems was $O(\sqrt{n})$, and indeed some authors suggested that this ratio might be the best achievable for them. We show, in fact, that there is an $O(n^{1/3})$-approximation algorithm for Max Rep and an $O(n^{1/3}\log^{2/3} n)$-approximation algorithm for Min Rep. In addition, we exhibit a randomized reduction from Densest k-Subgraph to Max Rep, showing that any approximation factor for Max Rep implies the same factor (up to a constant) for Densest k-Subgraph.

11.
We explore relationships between circuit complexity, the complexity of generating circuits, and algorithms for analyzing circuits. Our results can be divided into two parts:
  1. Lower bounds against medium-uniform circuits. Informally, a circuit class is “medium uniform” if it can be generated by an algorithmic process that is somewhat complex (stronger than LOGTIME) but not infeasible. Using a new kind of indirect diagonalization argument, we prove several new unconditional lower bounds against medium-uniform circuit classes, including:
  • For all k, P is not contained in P-uniform SIZE($n^k$). That is, for all k, there is a language $L_k \in \textsf{P}$ that does not have $O(n^k)$-size circuits constructible in polynomial time. This improves Kannan’s lower bound from 1982 that NP is not in P-uniform SIZE($n^k$) for any fixed k.
  • For all k, NP is not in $\textsf{P}^{\textsf{NP}}_{||}$-uniform SIZE($n^k$). This also improves Kannan’s theorem, but in a different way: the uniformity condition on the circuits is stronger than that on the language itself.
  • For all k, LOGSPACE does not have LOGSPACE-uniform branching programs of size $n^k$.
  2. Eliminating non-uniformity and (non-uniform) circuit lower bounds. We complement these results by showing how to convert any potential simulation of LOGTIME-uniform $\textsf{NC}^1$ in $\textsf{ACC}^0$/poly or $\textsf{TC}^0$/poly into a medium-uniform simulation using small advice. This lemma can be used to simplify the proof that faster SAT algorithms imply NEXP circuit lower bounds, and it leads to the following new connection:
  • Consider the following task: given a $\textsf{TC}^0$ circuit C of $n^{O(1)}$ size, output yes when C is unsatisfiable, and output no when C has at least $2^{n-2}$ satisfying assignments. (Behavior on other inputs can be arbitrary.) Clearly, this problem can be solved efficiently using randomness. If this problem can be solved deterministically in $2^{n-\omega(\log n)}$ time, then $\textsf{NEXP} \not\subset \textsf{TC}^0/\mathrm{poly}$.
Another application is to derandomize randomized $\textsf{TC}^0$ simulations of $\textsf{NC}^1$ on almost all inputs:
  • Suppose $\textsf{NC}^1 \subseteq \textsf{BPTC}^0$. Then, for every ε > 0 and every language L in $\textsf{NC}^1$, there is a LOGTIME-uniform $\textsf{TC}^0$ circuit family of polynomial size recognizing a language L′ such that L and L′ differ on at most $2^{n^{\epsilon}}$ inputs of length n, for all n.

12.
Zeev Nutov, Algorithmica 2012, 63(1-2):398–410
We consider the (undirected) Node Connectivity Augmentation (NCA) problem: given a graph $J=(V,E_J)$ and connectivity requirements $\{r(u,v): u,v \in V\}$, find a minimum-size set I of new edges (any edge is allowed) such that the graph $J \cup I$ contains r(u,v) internally-disjoint uv-paths, for all $u,v \in V$. In Rooted NCA there is $s \in V$ such that r(u,v)>0 implies u=s or v=s. For large values of $k=\max_{u,v \in V} r(u,v)$, NCA is at least as hard to approximate as Label-Cover, and thus it is unlikely to admit an approximation ratio polylogarithmic in k. Rooted NCA is at least as hard to approximate as Hitting-Set. The previously best approximation ratios for the problem were $O(k \ln n)$ for NCA and $O(\ln n)$ for Rooted NCA. In this paper we give an approximation algorithm with ratios $O(k \ln^2 k)$ for NCA and $O(\ln^2 k)$ for Rooted NCA. This is the first approximation algorithm with ratio independent of n, and thus constant for any fixed k. Our algorithm is based on the following new structural result, which is of independent interest. If $\mathcal{D}$ is a set of node pairs in a graph J, then the maximum degree in the hypergraph formed by the inclusion-minimal tight sets separating at least one pair in $\mathcal{D}$ is $O(\ell^2)$, where $\ell$ is the maximum connectivity in J of a pair in $\mathcal{D}$.

13.
Approximation Algorithms for the Directed k-Tour and k-Stroll Problems
We consider two natural generalizations of the Asymmetric Traveling Salesman problem: the k-Stroll and the k-Tour problems. The input to the k-Stroll problem is a directed n-vertex graph with non-negative edge lengths, an integer k, and two special vertices s and t. The goal is to find a minimum-length s-t walk containing at least k distinct vertices (including the endpoints s, t). The k-Tour problem can be viewed as a special case of k-Stroll where s=t; that is, the walk is required to be a tour containing some pre-specified vertex s. When k=n, the k-Stroll problem becomes equivalent to Asymmetric Traveling Salesman Path, and k-Tour to Asymmetric Traveling Salesman. Our main result is a polylogarithmic approximation algorithm for the k-Stroll problem. Prior to our work, only bicriteria $(O(\log^2 k), 3)$-approximation algorithms were known, producing walks whose length is bounded by 3·OPT, while the number of vertices visited is $\Omega(k/\log^2 k)$. We also show a simple $O(\log^2 n/\log\log n)$-approximation algorithm for the k-Tour problem. The best previously known approximation algorithms achieved a $\min(O(\log^3 k), O(\log^2 n\cdot\log k/\log\log n))$ approximation in polynomial time, and an $O(\log^2 k)$ approximation in quasi-polynomial time.

14.
Given a metric graph G=(V,E) of n vertices, i.e., a complete graph with a non-negative real edge cost function satisfying the triangle inequality, the metricity degree of G is defined as $\beta=\max_{x,y,z\in V}\{\frac{c(x,y)}{c(x,z)+c(y,z)}\}\in[\frac{1}{2},1]$. This value is instrumental in establishing the approximability of several NP-hard optimization problems definable on G, such as the prominent traveling salesman problem, which asks for a Hamiltonian cycle of G of minimum total cost. In fact, this problem can be approximated quite accurately depending on the metricity degree of G, namely by a ratio of either $\frac{2-\beta}{3(1-\beta)}$ or $\frac{3\beta^{2}}{3\beta^{2}-2\beta+1}$, for $\beta<\frac{2}{3}$ or $\beta\geq\frac{2}{3}$, respectively. Nevertheless, these approximation algorithms have $O(n^3)$ and $O(n^{2.5}\log^{1.5} n)$ running time, respectively, and are therefore superlinear in the $\Theta(n^2)$ input size. Thus, since many real-world problems are modeled by graphs of huge size, their use might turn out to be unfeasible in practice, and alternative approaches requiring only $O(n^2)$ time are sought. However, with this restriction, all the currently available approaches can only guarantee a 2-approximation ratio for the case β=1, which means a $\frac{2\beta^{2}}{2\beta^{2}-2\beta+1}$-approximation ratio for general β<1. In this paper, we show how to refine one of these approaches, namely the classic double-MST heuristic, without affecting the space and time complexity, in order to obtain a 2β-approximate solution. This improvement is effective, since we show that the double-MST heuristic has in general a performance ratio strictly larger than 2β, and we further show that no alternative elaboration of it can lead to a performance ratio better than 2β−ε, for any ε>0. Our theoretical results are complemented with an extensive series of experiments that show the practical appeal of our approach.
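For reference, here is a sketch of the plain double-MST heuristic that the paper refines (this is the classical 2-approximation under the triangle inequality, not the paper's 2β elaboration; identifiers are illustrative): build a minimum spanning tree, conceptually double its edges to get an Euler walk, and shortcut repeated vertices into a Hamiltonian tour.

```python
import math

def double_mst_tour(dist):
    """Tree-doubling heuristic for metric TSP: the preorder walk of an MST
    shortcuts its doubled Euler tour, so the tour costs <= 2*MST <= 2*OPT."""
    n = len(dist)
    # Prim's algorithm in O(n^2), fine for a complete metric graph
    in_tree = [False] * n
    best = [math.inf] * n
    parent = [-1] * n
    best[0] = 0
    children = [[] for _ in range(n)]
    for _ in range(n):
        u = min((v for v in range(n) if not in_tree[v]), key=lambda v: best[v])
        in_tree[u] = True
        if parent[u] >= 0:
            children[parent[u]].append(u)
        for v in range(n):
            if not in_tree[v] and dist[u][v] < best[v]:
                best[v], parent[v] = dist[u][v], u
    # preorder traversal = doubled Euler walk with repeated vertices shortcut
    tour, stack = [], [0]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    return tour  # close the cycle by returning from tour[-1] to tour[0]
```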

15.
We study the Cutwidth problem, where the input is a graph G and the objective is to find a linear layout of the vertices that minimizes the maximum number of edges intersected by any vertical line inserted between two consecutive vertices. We give an algorithm for Cutwidth with running time $O(2^k n^{O(1)})$, where k is the size of a minimum vertex cover of the input graph G and n is the number of vertices in G. As a corollary, our algorithm yields an $O(2^{n/2} n^{O(1)})$-time algorithm for Cutwidth on bipartite graphs. This is the first non-trivial exact exponential-time algorithm for Cutwidth on a graph class where the problem remains NP-complete. Additionally, we show that Cutwidth parameterized by the size of the minimum vertex cover of the input graph does not admit a polynomial kernel unless $\textsf{NP} \subseteq \textsf{coNP}/\mathrm{poly}$. Our kernelization lower bound contrasts with the recent results of Bodlaender et al. (ICALP, Springer, Berlin, 2011; SWAT, Springer, Berlin, 2012) that both Treewidth and Pathwidth parameterized by vertex cover do admit polynomial kernels.

16.
We give a self-reduction for the Circuit Evaluation problem (CircEval) and prove the following consequences.
  1. Amplifying size–depth lower bounds. If CircEval has Boolean circuits of $n^k$ size and $n^{1-\delta}$ depth for some k and δ, then for every $\epsilon > 0$, there is a δ′ > 0 such that CircEval has circuits of $n^{1+\epsilon}$ size and $n^{1-\delta'}$ depth. Moreover, the resulting circuits require only $\tilde{O}(n^{\epsilon})$ bits of non-uniformity to construct. As a consequence, strong enough depth lower bounds for Circuit Evaluation imply a full separation of P and NC (even with a weak size lower bound).
  2. Lower bounds for quantified Boolean formulas. Let c, d > 1 and e < 1 satisfy $c < (1 - e^d)/d$. Either the problem of recognizing valid quantified Boolean formulas (QBF) is not solvable in TIME[$n^c$], or the Circuit Evaluation problem cannot be solved with circuits of $n^d$ size and $n^e$ depth. This implies unconditional polynomial-time uniform circuit lower bounds for solving QBF. We also prove that QBF does not have $n^c$-time uniform NC circuits, for all c < 2.

17.
We show that the promise problem of distinguishing n-bit strings of relative Hamming weight $1/2 + \Omega(1/\lg^{d-1} n)$ from strings of weight $1/2 - \Omega(1/\lg^{d-1} n)$ can be solved by explicit, randomized (unbounded fan-in) poly(n)-size depth-d circuits with error $\leq 1/3$, but cannot be solved by deterministic poly(n)-size depth-(d+1) circuits, for every $d \geq 2$; and the depth of both is tight. Our bounds match Ajtai’s simulation of randomized depth-d circuits by deterministic depth-(d+2) circuits (Ann. Pure Appl. Logic, 1983) and provide an example where randomization buys resources. To rule out deterministic circuits, we combine Håstad’s switching lemma with an earlier depth-3 lower bound by the author (Computational Complexity 2009). To exhibit randomized circuits, we combine recent analyses by Amano (ICALP 2009) and Brody and Verbin (FOCS 2010) with derandomization. To make these circuits explicit, we construct a new, simple pseudorandom generator that fools tests $A_1 \times A_2 \times \cdots \times A_{\lg n}$ for $A_i \subseteq [n]$, $|A_i| = n/2$, with error 1/n and seed length $O(\lg n)$, improving on the seed length $\Omega(\lg n \lg\lg n)$ of previous constructions.

18.
The Travelling Salesman Problem is one of the fundamental and intensively studied problems in approximation algorithms. For more than 30 years, the best algorithm known for general metrics has been Christofides’s algorithm with an approximation factor of $\frac{3}{2}$, even though the so-called Held-Karp LP relaxation of the problem is conjectured to have an integrality gap of only $\frac{4}{3}$. Very recently, significant progress has been made for the important special case of graphic metrics, first by Oveis Gharan et al. (FOCS, 550–559, 2011), and then by Mömke and Svensson (FOCS, 560–569, 2011). In this paper, we provide an improved analysis of the approach presented in Mömke and Svensson (FOCS, 560–569, 2011), yielding a bound of $\frac{13}{9}$ on the approximation factor, as well as a bound of $\frac{19}{12}+\varepsilon$ for any ε>0 for the more general Travelling Salesman Path Problem in graphic metrics.

19.
Thorup and Zwick (J. ACM 52(1):1–24, 2005, and STOC’01) in their seminal work introduced the notion of distance oracles. Given an n-vertex weighted undirected graph with m edges, they show that for any integer k≥1 it is possible to preprocess the graph in $\tilde{O}(mn^{1/k})$ time and generate a compact data structure of size $O(kn^{1+1/k})$. For each pair of vertices, it is then possible to retrieve an estimated distance with multiplicative stretch 2k−1 in O(k) time. For k=2 this gives an oracle of $O(n^{1.5})$ size that produces in constant time estimated distances with stretch 3. Recently, Pătrașcu and Roditty (in: Proc. of 51st FOCS, 2010) broke the theoretical status quo in the field of distance oracles and obtained a distance oracle for sparse unweighted graphs of $O(n^{5/3})$ size that produces in constant time estimated distances with stretch 2. In this paper we show that it is possible to break the stretch-2 barrier at the price of non-constant query time in unweighted undirected graphs. We present a data structure that produces estimated distances with 1+ε stretch. The size of the data structure is $O(nm^{1-\varepsilon})$ and the query time is $\tilde{O}(m^{1-\varepsilon'})$. Using it for sparse unweighted graphs we can get a data structure of size $O(n^{1.87})$ that can supply in $O(n^{0.87})$ time estimated distances with multiplicative stretch 1.75.
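For context, a simplified sketch of the Thorup–Zwick construction for k = 2 on unweighted graphs (stretch 3 with expected size about $O(n^{1.5})$), i.e., the status quo that this paper improves on; all identifiers are illustrative:

```python
import random
from collections import deque

def _bfs(adj, n, sources):
    """Multi-source BFS distances in an unweighted graph."""
    dist = [float('inf')] * n
    q = deque(sources)
    for s in sources:
        dist[s] = 0
    while q:
        u = q.popleft()
        for w in adj[u]:
            if dist[w] == float('inf'):
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def build_oracle(adj, n):
    """Sample ~sqrt(n) landmarks A; store each vertex's nearest landmark
    p(v) and its bunch B(v) = {w : d(v, w) < d(v, A)}."""
    A = [v for v in range(n) if random.random() < n ** -0.5] or [0]
    dA = _bfs(adj, n, A)                       # distance to nearest landmark
    land = {a: _bfs(adj, n, [a]) for a in A}   # full distances per landmark
    p = [min(A, key=lambda a: land[a][v]) for v in range(n)]
    bunch = []
    for v in range(n):
        dist = {v: 0}                          # BFS truncated below d(v, A)
        q = deque([v])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist and dist[u] + 1 < dA[v]:
                    dist[w] = dist[u] + 1
                    q.append(w)
        bunch.append(dist)
    return p, land, bunch

def query(u, v, p, land, bunch):
    """Exact distance if v lies in u's bunch; otherwise route through u's
    nearest landmark, which gives stretch at most 3."""
    if v in bunch[u]:
        return bunch[u][v]
    return land[p[u]][u] + land[p[u]][v]
```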

20.
We consider transactional memory contention management in the context of balanced workloads, where if a transaction is writing, the number of write operations it performs is a constant fraction of its total reads and writes. We explore the theoretical performance boundaries of contention management in balanced workloads from the worst-case perspective by presenting and analyzing two new polynomial-time contention management algorithms. We analyze the performance of a contention management algorithm by comparison with an optimal offline contention management algorithm, to provide a competitive ratio. The first algorithm, Clairvoyant, is $O(\sqrt{s})$-competitive, where s is the number of shared resources; it depends on explicitly knowing the conflict graph at each time step of execution. The second algorithm, Non-Clairvoyant, is $O(\sqrt{s}\cdot\log n)$-competitive with high probability, where n is the number of transactions; this is only an O(log n) factor worse, but it does not require knowledge of the conflict graph. Both of these algorithms are greedy. We also prove that the performance of Clairvoyant is close to optimal: there is no polynomial-time contention management algorithm for the balanced transaction scheduling problem that is better than $O((\sqrt{s})^{1-\varepsilon})$-competitive for any constant ε>0, unless $\textsf{NP} \subseteq \textsf{ZPP}$. To our knowledge, these results are significant improvements over the best previously known O(s) competitive ratio bound.
