Similar Literature
20 similar records found.
1.
Zeev Nutov, Algorithmica 2014, 70(2):340–364
We consider Degree Constrained Survivable Network problems. For the directed Degree Constrained k-Edge-Outconnected Subgraph problem, we slightly improve the best known approximation ratio, by a simple proof. Our main contribution is a framework for handling node-connectivity degree-constrained problems with the iterative rounding method. In particular, for the degree-constrained versions of the Element-Connectivity Survivable Network problem on undirected graphs, and of the k-Outconnected Subgraph problem on both directed and undirected graphs, our algorithm computes a solution J of cost O(log k) times the optimal, with degrees O(2^k)·b(v). Similar results are obtained for the k-Connected Subgraph problem. The latter improves on the only previous degree approximation, O(k log n)·b(v) in O(n^k) time on undirected graphs, by Feder, Motwani, and Zhu.

2.
Zeev Nutov, Algorithmica 2012, 63(1-2):398–410
We consider the (undirected) Node Connectivity Augmentation (NCA) problem: given a graph J=(V,E_J) and connectivity requirements {r(u,v) : u,v ∈ V}, find a minimum-size set I of new edges (any edge is allowed) such that the graph J ∪ I contains r(u,v) internally disjoint uv-paths for all u,v ∈ V. In Rooted NCA there is an s ∈ V such that r(u,v)>0 implies u=s or v=s. For large values of k = max_{u,v∈V} r(u,v), NCA is at least as hard to approximate as Label-Cover, and thus it is unlikely to admit an approximation ratio polylogarithmic in k; Rooted NCA is at least as hard to approximate as Hitting-Set. The previously best approximation ratios for the problem were O(k ln n) for NCA and O(ln n) for Rooted NCA. In this paper we give an approximation algorithm with ratio O(k ln² k) for NCA and O(ln² k) for Rooted NCA. This is the first approximation algorithm with ratio independent of n, and thus a constant for any fixed k. Our algorithm is based on the following new structural result, which is of independent interest: if D is a set of node pairs in a graph J, then the maximum degree in the hypergraph formed by the inclusion-minimal tight sets separating at least one pair in D is O(ℓ²), where ℓ is the maximum connectivity in J of a pair in D.

3.
We explore relationships between circuit complexity, the complexity of generating circuits, and algorithms for analyzing circuits. Our results can be divided into two parts:
1. Lower bounds against medium-uniform circuits. Informally, a circuit class is "medium uniform" if it can be generated by an algorithmic process that is somewhat complex (stronger than LOGTIME) but not infeasible. Using a new kind of indirect diagonalization argument, we prove several new unconditional lower bounds against medium-uniform circuit classes, including:
• For all k, P is not contained in P-uniform SIZE(n^k). That is, for all k there is a language L_k ∈ P that does not have O(n^k)-size circuits constructible in polynomial time. This improves Kannan's lower bound from 1982 that NP is not in P-uniform SIZE(n^k) for any fixed k.
• For all k, NP is not in P^NP_||-uniform SIZE(n^k). This also improves Kannan's theorem, but in a different way: the uniformity condition on the circuits is stronger than that on the language itself.
• For all k, LOGSPACE does not have LOGSPACE-uniform branching programs of size n^k.
2. Eliminating non-uniformity and (non-uniform) circuit lower bounds. We complement these results by showing how to convert any potential simulation of LOGTIME-uniform NC^1 in ACC^0/poly or TC^0/poly into a medium-uniform simulation using small advice. This lemma can be used to simplify the proof that faster SAT algorithms imply NEXP circuit lower bounds, and leads to the following new connection:
• Consider the following task: given a TC^0 circuit C of n^{O(1)} size, output yes when C is unsatisfiable, and output no when C has at least 2^{n−2} satisfying assignments. (Behavior on other inputs can be arbitrary.) Clearly, this problem can be solved efficiently using randomness. If this problem can be solved deterministically in 2^{n−ω(log n)} time, then NEXP ⊄ TC^0/poly.
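As a concrete aid, here is a sketch of the randomized routine that this task asks to derandomize (my own illustration, not from the paper; the circuit is modeled as an arbitrary Python predicate rather than a genuine TC^0 circuit):

```python
import random

def randomized_gap_sat(circuit, n, trials=64):
    """Return True ("unsatisfiable") or False ("many satisfying assignments").
    Under the promise, a circuit with >= 2^(n-2) solutions is satisfied by a
    random assignment with probability >= 1/4, so all `trials` samples miss
    only with probability <= (3/4)^trials; behavior outside the promise is
    allowed to be arbitrary."""
    for _ in range(trials):
        x = [random.randint(0, 1) for _ in range(n)]
        if circuit(x):
            return False   # found a satisfying assignment
    return True

print(randomized_gap_sat(lambda x: False, 10))      # unsatisfiable: True
print(randomized_gap_sat(lambda x: x[0] == 1, 10))  # 2^(n-1) solutions: False (whp)
```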
Another application is to derandomize randomized TC^0 simulations of NC^1 on almost all inputs:
• Suppose NC^1 ⊆ BPTC^0. Then for every ε > 0 and every language L in NC^1, there is a LOGTIME-uniform TC^0 circuit family of polynomial size recognizing a language L′ such that L and L′ differ on at most 2^{n^ε} inputs of length n, for all n.

4.
We strengthen a previously known connection between the size complexity of two-way finite automata (2FAs) and the space complexity of Turing machines (TMs). Specifically, we prove that:
• every s-state 2NFA has a poly(s)-state 2DFA that agrees with it on all inputs of length ≤ s if and only if NL ⊆ L/poly, and
• every s-state 2NFA has a poly(s)-state 2DFA that agrees with it on all inputs of length ≤ 2^s if and only if NLL ⊆ LL/polylog.
Here, 2DFAs and 2NFAs are the deterministic and nondeterministic 2FAs; NL and L/poly are the standard classes of languages recognizable in logarithmic space by nondeterministic TMs and by deterministic TMs with access to polynomially long advice; and NLL and LL/polylog are the corresponding complexity classes for space O(log log n) and advice length poly(log n). Our arguments strengthen and extend an old theorem by Berman and Lingas and can be used to obtain variants of the above statements for other modes of computation or other combinations of bounds for the input length, the space usage, and the length of advice.

    5.
    We report progress on the NL versus UL problem.
• We show that counting the number of s-t paths in graphs where the number of s-v paths for any v is bounded by a polynomial can be done in FUL, the unambiguous log-space function class. Several new upper bounds follow from this, including ReachFewL ⊆ UL and LFew ⊆ UL^FewL.
• We investigate the complexity of min-uniqueness, a central notion in studying the NL versus UL problem. In this regard we revisit the class OptL[log n] and introduce UOptL[log n], an unambiguous version of OptL[log n], and study its relation to other existing complexity classes.
• We consider the unambiguous hierarchies over UL and UOptL[log n]. We show that the hierarchy over UOptL[log n] collapses. This implies that ULH ⊆ L^promiseUL, thus collapsing the UL hierarchy.
  • We show that the reachability problem over graphs embedded on 3 pages is complete for NL. This contrasts with the reachability problem over graphs embedded on 2 pages, which is log-space equivalent to the reachability problem in planar graphs and hence is in UL.

    6.
We consider transactional memory contention management in the context of balanced workloads, where if a transaction is writing, the number of write operations it performs is a constant fraction of its total reads and writes. We explore the theoretical performance boundaries of contention management in balanced workloads from the worst-case perspective by presenting and analyzing two new polynomial-time contention management algorithms. We analyze the performance of a contention management algorithm by comparison with an optimal offline contention management algorithm, yielding a competitive ratio. The first algorithm, Clairvoyant, is O(√s)-competitive, where s is the number of shared resources; it depends on explicitly knowing the conflict graph at each time step of execution. The second algorithm, Non-Clairvoyant, is O(√s · log n)-competitive with high probability, where n is the number of transactions; this is only an O(log n) factor worse, but it does not require knowledge of the conflict graph. Both algorithms are greedy. We also prove that the performance of Clairvoyant is close to optimal: there is no polynomial-time contention management algorithm for the balanced transaction scheduling problem that is better than O((√s)^{1−ε})-competitive for any constant ε>0 unless NP ⊆ ZPP. To our knowledge, these results are significant improvements over the best previously known O(s) competitive ratio bound.
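As a rough illustration of the greedy, conflict-graph-driven flavor of such algorithms (a sketch under my own modeling assumptions, not the paper's Clairvoyant; its priority rule and analysis are more refined), consider scheduling transactions in rounds, committing a conflict-free subset per round:

```python
# Transactions are modeled as sets of (resource, is_write) accesses; two
# transactions conflict if they touch a common resource and at least one
# access is a write. Priority here is simply "fewest resources first",
# which is an assumption made for illustration.

def conflicts(t1, t2):
    for r1, w1 in t1:
        for r2, w2 in t2:
            if r1 == r2 and (w1 or w2):
                return True
    return False

def greedy_schedule(transactions):
    """Return rounds; each round is a list of indices committed together."""
    pending = sorted(range(len(transactions)), key=lambda i: len(transactions[i]))
    rounds = []
    while pending:
        chosen, rest = [], []
        for i in pending:
            if all(not conflicts(transactions[i], transactions[j]) for j in chosen):
                chosen.append(i)   # runs (and commits) this round
            else:
                rest.append(i)     # aborted/deferred to a later round
        rounds.append(chosen)
        pending = rest
    return rounds

txs = [{("a", True), ("b", False)},   # writes a, reads b
       {("a", False)},                # reads a -> conflicts with tx 0
       {("c", True)}]                 # independent
print(greedy_schedule(txs))           # [[1, 2], [0]]: two rounds
```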

    7.
The set of permutations of {1, …, n} in one-line notation is Π(n). The shorthand encoding of a_1⋯a_n ∈ Π(n) is a_1⋯a_{n−1}. A shorthand universal cycle for permutations (SP-cycle) is a circular string of length n! whose substrings of length n−1 are the shorthand encodings of Π(n). When an SP-cycle is decoded, the order of Π(n) is a Gray code in which successive permutations differ by the prefix-rotation σ_i = (1 2 ⋯ i) for i ∈ {n−1, n}. Thus, SP-cycles can be represented by n! bits. We investigate SP-cycles with maximum and minimum "weight" (number of σ_{n−1}s in the Gray code). An SP-cycle n a n b ⋯ n z is "periodic" if its "sub-permutations" a, b, …, z equal Π(n−1). We prove that periodic min-weight SP-cycles correspond to spanning trees of the (n−1)-permutohedron. We provide two constructions: B(n) and C(n). In B(n) the spanning trees use "half-hunts" from bell-ringing, and in C(n) the sub-permutations use cool-lex order by Williams (SODA, pp. 987–996, 2009). Algorithmic results are: (1) memoryless decoding of B(n) and C(n), (2) O((n−1)!)-time generation of B(n) and C(n) using sub-permutations, (3) loopless generation of B(n)'s binary representation n bits at a time, and (4) O(n+ν(n))-time ranking of B(n)'s permutations, where ν(n) is the cost of computing a permutation's inversion vector. Results (1)–(4) improve on those for the previous SP-cycle construction D(n) by Ruskey and Williams (ACM Trans. Algorithms 6(3):Art. 45, 2010), which we characterize here using "recycling".
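As a small illustration of the shorthand encoding (my own sketch of the decoding rule stated above; the example cycle for n = 3 is hand-checked, not one of the constructions B(n), C(n), or D(n)):

```python
# Each length-(n-1) window of the circular SP-cycle string is a shorthand
# a_1...a_{n-1}; the full permutation is recovered by appending the unique
# missing symbol of {1,...,n}.

def decode_sp_cycle(cycle, n):
    perms = []
    L = len(cycle)                      # equals n! for a true SP-cycle
    for i in range(L):
        shorthand = [cycle[(i + j) % L] for j in range(n - 1)]
        missing = (set(range(1, n + 1)) - set(shorthand)).pop()
        perms.append(tuple(shorthand) + (missing,))
    return perms

# 321312 is a valid SP-cycle for n = 3: its six windows of length 2 decode
# to all of Pi(3), each exactly once.
for p in decode_sp_cycle([3, 2, 1, 3, 1, 2], 3):
    print(p)
```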

    8.
In the k-Feedback Arc/Vertex Set problem we are given a directed graph D and a positive integer k, and the objective is to check whether it is possible to delete at most k arcs/vertices from D to make it acyclic. Dom et al. (J. Discrete Algorithms 8(1):76–86, 2010) initiated the study of the Feedback Arc Set problem on bipartite tournaments (k-FASBT) in the realm of parameterized complexity. They showed that k-FASBT can be solved in time O(3.373^k n^6) on bipartite tournaments having n vertices. However, until now there was no known polynomial-size problem kernel for k-FASBT. In this paper we obtain a cubic vertex kernel for k-FASBT. This completes the kernelization picture for the Feedback Arc/Vertex Set problem on tournaments and bipartite tournaments, as polynomial kernels were previously known for all the other problems. We obtain our kernel using a non-trivial application of "independent modules", which could be of independent interest.

    9.
Vertex deletion and edge deletion problems play a central role in parameterized complexity. Examples include classical problems like Feedback Vertex Set, Odd Cycle Transversal, and Chordal Deletion. The study of analogous edge contraction problems has so far been left largely unexplored from a parameterized perspective. We consider two basic problems of this type: Tree Contraction and Path Contraction. These two problems take as input an undirected graph G on n vertices and an integer k, and the task is to determine whether we can obtain a tree or a path, respectively, by a sequence of at most k edge contractions in G. For Tree Contraction, we present a randomized 4^k · n^{O(1)} time polynomial-space algorithm, as well as a deterministic 4.98^k · n^{O(1)} time algorithm, based on a variant of the color coding technique of Alon, Yuster and Zwick. We also present a deterministic 2^{k+o(k)} + n^{O(1)} time algorithm for Path Contraction. Furthermore, we show that Path Contraction has a kernel with at most 5k+3 vertices, while Tree Contraction does not have a polynomial kernel unless NP ⊆ coNP/poly. We find the latter result surprising because of the connection between Tree Contraction and Feedback Vertex Set, which is known to have a kernel with 4k² vertices.
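For concreteness, here is a brute-force checker for these two problems (exponential, tiny graphs only, and emphatically not the paper's algorithms): G is k-contractible to a tree iff G/F is a tree for some edge set F with |F| ≤ k, so we can simply try every such F.

```python
from itertools import combinations

def contract(n, edges, subset):
    """Contract `subset` of edges via union-find; return (vertices, edges)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in subset:
        parent[find(u)] = find(v)
    vs = {find(v) for v in range(n)}
    es = {tuple(sorted((find(u), find(v)))) for u, v in edges if find(u) != find(v)}
    return vs, es

def is_tree(vs, es):
    if len(es) != len(vs) - 1:          # right edge count ...
        return False
    adj = {v: [] for v in vs}
    for u, v in es:
        adj[u].append(v); adj[v].append(u)
    seen, stack = set(), [next(iter(vs))]
    while stack:                        # ... plus connectivity => tree
        v = stack.pop()
        if v not in seen:
            seen.add(v); stack.extend(adj[v])
    return seen == vs

def contractible(n, edges, k, to_path=False):
    for r in range(k + 1):
        for f in combinations(edges, r):
            vs, es = contract(n, edges, f)
            if is_tree(vs, es) and (not to_path or
                    all(sum(v in e for e in es) <= 2 for v in vs)):
                return True
    return False

tri = [(0, 1), (1, 2), (0, 2)]
print(contractible(3, tri, 1, to_path=True))  # True: contract one edge -> P2
print(contractible(3, tri, 0, to_path=True))  # False: a triangle is no path
```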

    10.
    An ongoing line of research has shown super-polynomial lower bounds for uniform and slightly-non-uniform small-depth threshold and arithmetic circuits (Allender, in Chicago J. Theor. Comput. Sci. 1999(7), 1999; Koiran and Perifel, in Proceedings of the 24th Annual IEEE Conference on Computational Complexity (CCC 2009), pp. 35–40, 2009; Jansen and Santhanam, in Proceedings of the 38th International Colloquium on Automata, Languages and Programming (ICALP 2011), I, pp. 724–735, 2011). We give a unified framework that captures and improves each of the previous results. Our main results are that Permanent does not have threshold circuits of the following kinds.
1. Depth O(1), n^{o(1)} bits of non-uniformity, and size n^{O(1)}.
2. Depth O(1), polylog(n) bits of non-uniformity, and size s(n) such that for all constants c the c-fold composition of s, s^{(c)}(n), is less than 2^n.
3. Depth o(log log n), polylog(n) bits of non-uniformity, and size n^{O(1)}.
(1) strengthens a result of Jansen and Santhanam (ICALP 2011), who obtained similar parameters but for arithmetic circuits of constant depth rather than Boolean threshold circuits. (2) and (3) strengthen results of Allender (Chicago J. Theor. Comput. Sci. 1999(7), 1999) and of Koiran and Perifel (CCC 2009, pp. 35–40), respectively, who obtained results with similar parameters but for completely uniform circuits. Our main technical contribution is to simplify and unify earlier proofs in this area, and to adapt the proofs to handle some amount of non-uniformity. We also develop a notion of circuits with a small amount of non-uniformity that naturally interpolates between fully uniform and fully non-uniform circuits. We use this notion, which we term weak uniformity, rather than the earlier and essentially equivalent notion of succinctness used by Jansen and Santhanam, because weak uniformity more fully and easily interpolates between full uniformity and non-uniformity of circuits.

    11.
We use algorithmic tools for graphs of small treewidth to address questions in complexity theory. For our main construction, we prove that multiplicatively disjoint arithmetic circuits of size n^{O(1)} and treewidth k can be simulated by bounded fan-in arithmetic formulas of depth O(k² log n). From this we derive an analogous statement for syntactically multilinear arithmetic circuits, which strengthens the central theorem of M. Mahajan and B.V.R. Rao (MFCS 2008, LNCS 5162, pp. 455–466). We show our main construction has the following three applications:
• Bounded-width arithmetic circuits of size n^{O(1)} can be balanced to depth O(log n), provided chains of iterated multiplication in the circuit are of length O(1).
• Boolean bounded fan-in circuits of size n^{O(1)} and treewidth k can be simulated by bounded fan-in formulas of depth O(k² log n). This strengthens, in the non-uniform setting, the known inclusion SC^0 ⊆ NC^1.
• We demonstrate treewidth-restricted cases of Directed Reachability and the Circuit Value Problem that can be solved in LogDCFL.
We also give a construction showing, for both arithmetic and Boolean circuits, that any circuit of size n^{O(1)} and treewidth O(log^i n) can be simulated by a circuit of width O(log^{i+1} n) and size n^c, where c = O(1) if i = 0, and c = O(log log n) otherwise.

    12.
Through its terms-allowed-in-formulas capacity, Artemov's Logic of Proofs LP includes self-referential formulas of the form t:φ(t), which play a crucial role in the realization of the modal logic S4 in LP, and in the Brouwer–Heyting–Kolmogorov semantics of intuitionistic logic via LP. In earlier work, which appeared in the Proceedings of CSR 2010, the author defined prehistoric loops in a sequent calculus for S4 and verified their necessity for self-referentiality in the S4–LP realization. In this extended version we generalize those results to T and K4, two modal logics weaker than S4 that nonetheless call for self-referentiality in their realizations into the corresponding justification logics JT and J4.

    13.
Given a set of points V in the plane, the Euclidean bottleneck matching problem is to match each point with some other point such that the longest Euclidean distance between matched points is minimized. To solve this problem, we define k-relative neighborhood graphs (kRNGs), which are derived from Toussaint's relative neighborhood graphs (RNGs). Two points are called k-relative neighbors if and only if fewer than k points of V are closer to both of the two points than the two points are to each other. A kRNG is an undirected graph (V, E_r^k), where E_r^k is the set of pairs of points of V that are k-relative neighbors. We prove that there exists an optimal solution of the Euclidean bottleneck matching problem that is a subset of E_r^17. We also prove that |E_r^k| < 18kn, where n is the number of points in V. Our algorithm first constructs a 17RNG, which takes O(n²) time. We then use Gabow and Tarjan's bottleneck maximum cardinality matching algorithm for general graphs, whose time complexity is O((n log n)^{0.5} m), where m is the number of edges in the graph, to solve the bottleneck maximum cardinality matching problem in the 17RNG. This takes O(n^{1.5} log^{0.5} n) time. The total time complexity of our algorithm for the Euclidean bottleneck matching problem is O(n² + n^{1.5} log^{0.5} n).
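A definition-level sketch of the kRNG construction (my own naive version, which runs in cubic time rather than the O(n²) claimed above):

```python
# u and v are k-relative neighbors iff fewer than k other points are
# simultaneously closer to u and to v than u and v are to each other.
from itertools import combinations

def krng_edges(points, k):
    d2 = lambda p, q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    edges = []
    for u, v in combinations(range(len(points)), 2):
        duv = d2(points[u], points[v])
        blockers = sum(1 for w in range(len(points))
                       if w not in (u, v)
                       and d2(points[w], points[u]) < duv
                       and d2(points[w], points[v]) < duv)
        if blockers < k:
            edges.append((u, v))
    return edges

pts = [(0, 0), (1, 0), (2, 0), (0, 1)]
print(krng_edges(pts, 1))   # k = 1 recovers Toussaint's classical RNG
```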

    14.
We study the Cutwidth problem, where the input is a graph G and the objective is to find a linear layout of the vertices that minimizes the maximum number of edges intersected by any vertical line inserted between two consecutive vertices. We give an algorithm for Cutwidth with running time O(2^k n^{O(1)}), where k is the size of a minimum vertex cover of the input graph G and n is the number of vertices in G. As a corollary, our algorithm yields an O(2^{n/2} n^{O(1)}) time algorithm for Cutwidth on bipartite graphs. This is the first non-trivial exact exponential-time algorithm for Cutwidth on a graph class where the problem remains NP-complete. Additionally, we show that Cutwidth parameterized by the size of a minimum vertex cover of the input graph does not admit a polynomial kernel unless NP ⊆ coNP/poly. Our kernelization lower bound contrasts with the recent results of Bodlaender et al. (ICALP 2011; SWAT 2012) that both Treewidth and Pathwidth parameterized by vertex cover do admit polynomial kernels.
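For reference, the layout objective can be stated directly as code (a brute-force O(n!·m) sketch over all layouts, for tiny graphs only; the paper's O(2^k n^{O(1)}) algorithm is far more clever):

```python
from itertools import permutations

def cutwidth(n, edges):
    """Minimum over layouts of the largest number of edges crossing a gap
    between consecutive positions."""
    best = float("inf")
    for layout in permutations(range(n)):
        pos = {v: i for i, v in enumerate(layout)}
        width = max(sum(1 for u, v in edges
                        if min(pos[u], pos[v]) <= i < max(pos[u], pos[v]))
                    for i in range(n - 1))
        best = min(best, width)
    return best

# K4 has cutwidth 4: in any layout the middle gap is crossed by 2*2 = 4 edges.
print(cutwidth(4, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]))  # 4
```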

    15.
    We give a self-reduction for the Circuit Evaluation problem (CircEval) and prove the following consequences.
1. Amplifying size–depth lower bounds. If CircEval has Boolean circuits of n^k size and n^{1−δ} depth for some k and δ, then for every ε > 0 there is a δ′ > 0 such that CircEval has circuits of n^{1+ε} size and n^{1−δ′} depth. Moreover, the resulting circuits require only Õ(n^ε) bits of non-uniformity to construct. As a consequence, strong enough depth lower bounds for Circuit Evaluation imply a full separation of P and NC (even with a weak size lower bound).
2. Lower bounds for quantified Boolean formulas. Let c, d > 1 and e < 1 satisfy c < (1 − e^d)/d. Then either the problem of recognizing valid quantified Boolean formulas (QBF) is not solvable in TIME[n^c], or the Circuit Evaluation problem cannot be solved with circuits of n^d size and n^e depth. This implies unconditional polynomial-time uniform circuit lower bounds for solving QBF. We also prove that QBF does not have n^c-time uniform NC circuits, for all c < 2.

    16.
We study a simple technique, originally presented by Herlihy (ACM Trans. Program. Lang. Syst. 15(5):745–770, 1993), for executing concurrently, in a wait-free manner, blocks of code that have been programmed for sequential execution and require significant synchronization in order to be performed in parallel. We first present an implementation of this technique, called Sim, which employs a collect object. We describe a simple implementation of a collect object from a single shared object that supports atomic Add (or XOR) in addition to read; this implementation has step complexity O(1). By plugging this implementation into Sim, Sim itself exhibits constant step complexity. This allows us to derive lower bounds on the step complexity of implementations of several shared objects, such as Add, XOR, collect, and snapshot objects, from LL/SC objects. We then present a practical version of Sim, called PSim, implemented on a real shared-memory machine. From a theoretical perspective, PSim has worse step complexity than Sim, its theoretical analog; in practice, though, we show experimentally that PSim is highly efficient: it outperforms several state-of-the-art lock-based and lock-free synchronization techniques, even though it is wait-free, i.e., it satisfies a stronger progress condition than all the algorithms it outperforms. We have used PSim to obtain highly efficient wait-free implementations of stacks and queues.
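A heavily hedged sketch of how a collect object can be built from a single object supporting atomic Add plus read (the bit-field packing below is my own illustration of the idea, not necessarily the paper's construction; a lock stands in for the hardware atomic, and W = 16 bits per process is an assumed bound on announced values):

```python
import threading

W = 16  # bits per process; announced values must fit in W bits

class Collect:
    def __init__(self, nprocs):
        self.n = nprocs
        self.word = 0
        self.last = [0] * nprocs          # each process's own previous value
        self.lock = threading.Lock()      # stands in for hardware atomic Add

    def announce(self, i, value):
        """Process i stores `value` in bits [i*W, (i+1)*W) of the shared word
        with a single fetch-and-add of the signed difference."""
        delta = (value - self.last[i]) << (i * W)
        self.last[i] = value
        with self.lock:                   # one atomic Add
            self.word += delta

    def collect(self):
        snapshot = self.word              # one atomic read
        return [(snapshot >> (i * W)) & ((1 << W) - 1) for i in range(self.n)]

c = Collect(3)
c.announce(0, 7); c.announce(2, 42); c.announce(0, 9)
print(c.collect())    # [9, 0, 42]
```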

    17.
We introduce two new natural decision problems, denoted ∃ RATIONAL NASH and ∃ IRRATIONAL NASH, pertinent to the rationality and irrationality, respectively, of Nash equilibria for (finite) strategic games. These problems ask, given a strategic game, whether or not it admits (i) a rational Nash equilibrium, where all probabilities are rational numbers, and (ii) an irrational Nash equilibrium, where at least one probability is irrational, respectively. We are interested here in the complexities of ∃ RATIONAL NASH and ∃ IRRATIONAL NASH. Towards this end, we study two other decision problems, denoted NASH-EQUIVALENCE and NASH-REDUCTION, pertinent to some mutual properties of the sets of Nash equilibria of two given strategic games with the same number of players. The problem NASH-EQUIVALENCE asks whether or not the two sets of Nash equilibria coincide; we identify a restriction of its complementary problem that witnesses ∃ RATIONAL NASH. The problem NASH-REDUCTION asks whether or not there is a so-called Nash reduction: a suitable map between corresponding strategy sets of players that yields a Nash equilibrium of the former game from a Nash equilibrium of the latter; we identify a restriction of NASH-REDUCTION that witnesses ∃ IRRATIONAL NASH. As our main result, we provide two distinct reductions to show that (i) NASH-EQUIVALENCE is co-NP-hard and ∃ RATIONAL NASH is NP-hard, and (ii) NASH-REDUCTION and ∃ IRRATIONAL NASH are both NP-hard. The reductions significantly extend techniques previously employed by Conitzer and Sandholm (IJCAI 2003, pp. 765–771; Games Econ. Behav. 63(2):621–641, 2008).
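As a companion to the rational-equilibrium notion, here is a standard exact verifier (my own sketch, not from the paper) that checks over the rationals whether a mixed profile of a bimatrix game is a Nash equilibrium, i.e., that every strategy played with positive probability is a best reply:

```python
from fractions import Fraction as F

def is_nash(A, B, x, y):
    """A[i][j], B[i][j]: row/column player payoffs; x, y: mixed strategies."""
    row_payoffs = [sum(a_ij * yj for a_ij, yj in zip(row, y)) for row in A]
    col_payoffs = [sum(B[i][j] * x[i] for i in range(len(x)))
                   for j in range(len(y))]
    # Nash <=> every support strategy achieves the maximum expected payoff
    return (all(xi == 0 or row_payoffs[i] == max(row_payoffs)
                for i, xi in enumerate(x)) and
            all(yj == 0 or col_payoffs[j] == max(col_payoffs)
                for j, yj in enumerate(y)))

# Matching pennies: the unique equilibrium is rational, (1/2, 1/2) for both.
A = [[1, -1], [-1, 1]]
B = [[-1, 1], [1, -1]]
print(is_nash(A, B, [F(1, 2), F(1, 2)], [F(1, 2), F(1, 2)]))  # True
print(is_nash(A, B, [F(1), F(0)], [F(1), F(0)]))              # False
```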

    18.
In this paper, we show how to exploit the structure of some automata-based constructions to efficiently solve the LTL synthesis problem. We focus on a construction proposed by Schewe and Finkbeiner that reduces the synthesis problem to a safety game, which can then be solved by computing the solution of the classical fixpoint equation νX. Safe ∩ CPre(X), where CPre(X) is the set of controllable predecessors of X. We have shown in previous work that the sets computed during the fixpoint algorithm can be equipped with a partial order that allows one to represent them very compactly, by the antichain of their maximal elements. However, the computation of CPre(X) cannot be done in polynomial time when X is represented by an antichain (unless P = NP). This motivates the use of SAT solvers to compute CPre(X). Also, we show that the CPre operator can be replaced by a weaker operator CPre_crit in which the adversary is restricted to playing a subset of critical signals. We show that the fixpoints of the two operators coincide, so instead of iterating CPre we can iterate CPre_crit. In practice, this leads to important improvements over previous LTL synthesis methods. The reduction to SAT problems, the weakening of CPre into CPre_crit, and their performance evaluations are new.
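For intuition, the fixpoint νX. Safe ∩ CPre(X) can be computed by naive set iteration when the game is given explicitly (a minimal sketch; the whole point of the antichain and SAT machinery above is to avoid this enumeration):

```python
def solve_safety(states, safe, moves, controller):
    """moves: dict state -> iterable of successor states; controller: the set
    of states where the controller picks the move (the adversary owns the
    rest). Returns the winning region W = nu X.(safe & CPre(X))."""
    X = set(safe)
    while True:
        def cpre(s):
            succs = moves[s]
            return (any(t in X for t in succs) if s in controller
                    else all(t in X for t in succs))
        Y = {s for s in X if cpre(s)}
        if Y == X:
            return X
        X = Y

states = {0, 1, 2, 3}
moves = {0: [1, 2], 1: [0], 2: [3], 3: [3]}
controller = {0}            # state 0 is controlled; 1, 2, 3 are adversarial
print(solve_safety(states, safe={0, 1, 2}, moves=moves, controller=controller))
# {0, 1}: from 0 the controller picks 1; state 2 is doomed (forced into 3).
```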

    19.
In the uniform circuit model of computation, the width of a Boolean circuit exactly characterizes the "space" complexity of the computed function. Looking for a similar relationship in Valiant's algebraic model of computation, we propose the width of an arithmetic circuit as a possible measure of space. In the uniform setting, we show that our definition coincides with that of VPSPACE at polynomial width. We introduce the class VL as an algebraic variant of deterministic log-space L; VL is a subclass of VP. Further, to define algebraic variants of non-deterministic space-bounded classes, we introduce the notion of "read-once" certificates for arithmetic circuits. We show that polynomial-size algebraic branching programs (an algebraic analog of NL) can be expressed as read-once exponential sums over polynomials in VL, i.e., VBP ⊆ Σ^R · VL. Thus, read-once exponential sums can be viewed as a reasonable way of capturing space-bounded non-determinism. We also show that Σ^R · VBP = VBP, i.e., VBPs are stable under read-once exponential sums. Though the best upper bound we have for Σ^R · VL itself is VNP, we obtain better upper bounds for width-bounded multiplicatively disjoint (md-) circuits. Without the width restriction, md-arithmetic circuits are known to capture all of VP. We show that read-once exponential sums over md-constant-width arithmetic circuits are within VP, and that read-once exponential sums over md-polylog-width arithmetic circuits are within VQP. We also show that exponential sums of a skew formula cannot represent the determinant polynomial.

    20.
Given a DNF formula f on n variables, the two natural size measures are the number of terms, or size s(f), and the maximum width of a term, w(f). It is folklore that small DNF formulas can be made narrow: a formula with m terms can be ε-approximated by a formula of width log(m/ε). We prove a converse, showing that narrow formulas can be sparsified: any width-w DNF, irrespective of its size, can be ε-approximated by a width-w DNF with at most (w log(1/ε))^{O(w)} terms. We combine our sparsification result with the work of Luby and Velickovic (Algorithmica 16(4/5):415–433, 1996) to give a faster deterministic algorithm for approximately counting the number of satisfying solutions to a DNF. Given a formula on n variables with poly(n) terms, we give a deterministic n^{Õ(log log n)} time algorithm that computes an additive ε-approximation to the fraction of satisfying assignments of f, for ε = 1/poly(log n). The previous best result, due to Luby and Velickovic from nearly two decades ago, had a running time of n^{exp(O(√(log log n)))}.
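The folklore narrowing direction is simple enough to state as code (a sketch under the usual conventions; terms are partial assignments, and the check below is exhaustive, for small n only): dropping every term wider than w = log₂(m/ε) changes the accepted fraction by at most ε, since each dropped term accepts at most a 2^−w ≤ ε/m fraction of inputs.

```python
from itertools import product
from math import ceil, log2

def narrow(dnf, eps):
    """dnf: list of terms, each a dict var -> required bit."""
    w = ceil(log2(max(1, len(dnf)) / eps))
    return [t for t in dnf if len(t) <= w]   # drop terms wider than w

def fraction_satisfying(dnf, n):
    hits = sum(any(all(x[v] == b for v, b in t.items()) for t in dnf)
               for x in product([0, 1], repeat=n))
    return hits / 2 ** n

dnf = [{0: 1}, {1: 1, 2: 0}, {0: 0, 1: 0, 2: 0, 3: 0, 4: 1, 5: 1}]  # widths 1, 2, 6
approx = narrow(dnf, eps=0.25)   # w = ceil(log2(3/0.25)) = 4: drops the wide term
n = 6
print(abs(fraction_satisfying(dnf, n) - fraction_satisfying(approx, n)) <= 0.25)  # True
```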
