Similar Literature
20 similar documents found (search time: 15 ms)
1.
We study the approximation of min set cover, combining ideas and results from polynomial approximation and from exact computation (with non-trivial worst-case complexity upper bounds) for NP-hard problems. We design approximation algorithms for min set cover achieving ratios that cannot be achieved in polynomial time (unless problems in NP could be solved by slightly super-polynomial algorithms), with worst-case complexity much lower (though still super-polynomial) than that of an exact computation.
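For context, the classical polynomial-time baseline for min set cover is the greedy algorithm, which achieves a ratio of roughly ln n; the paper's moderately exponential algorithms trade extra (super-polynomial) time for ratios below this barrier. Below is a minimal sketch of that greedy baseline only, with hypothetical inputs, not the paper's algorithm.

```python
def greedy_set_cover(universe, sets):
    """Classical greedy approximation for min set cover (~ln n ratio).

    universe: set of elements to cover
    sets: dict mapping a set name to a frozenset of elements
    Returns a list of chosen set names (assumes the sets cover the universe).
    """
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Pick the set covering the most still-uncovered elements.
        best = max(sets, key=lambda name: len(sets[name] & uncovered))
        if not sets[best] & uncovered:
            raise ValueError("the given sets do not cover the universe")
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

# Hypothetical example
universe = {1, 2, 3, 4, 5}
sets = {"A": frozenset({1, 2, 3}), "B": frozenset({3, 4}), "C": frozenset({4, 5})}
print(greedy_set_cover(universe, sets))  # e.g. ['A', 'C']
```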

2.
We consider the resource-bounded measure of polynomial-time learnable subclasses of polynomial-size circuits. We show that if EXP ≠ MA, then every PAC-learnable subclass of P/poly has EXP-measure zero. We introduce a nonuniformly computable variant of resource-bounded measure and show that, for every fixed polynomial q, any polynomial-time learnable subclass of circuits of size q has measure zero with respect to P/poly. We relate our results to the question of whether the class of Boolean circuits is polynomial-time learnable. Received July 15, 1998, and in final form September 9, 1999.

3.
We introduce a model of computation based on the use of write-once memory. Write-once memory has the property that bits may be set but not reset. Our model consists of a RAM with a small amount of regular memory (such as logarithmic or n^α for α < 1, where n is the size of the problem) and a polynomial amount of write-once memory. Bounds are given on the time required to simulate, on write-once memory, algorithms which originally run on a RAM with a polynomial amount of regular memory. We attempt to characterize algorithms that can be simulated on our write-once memory model with very little slow-down. A persistent computation is one in which, at all times, the memory state of the computation at any previous point in time can be reconstructed. We show that any data structure or computation implemented on this write-once memory model can be made persistent without sacrificing much in the way of running time or space. The space requirements of algorithms running on the write-once model are studied. We show that general simulations of algorithms originally running on a RAM with regular memory by algorithms running on our write-once memory model require space proportional to the number of steps simulated. In order to study the space complexity further, we define an analogue of the pebbling game, called the pebble-sticker game. A sticker differs from a pebble in that it cannot be removed once placed on a node of the computation graph. Just as placing pebbles corresponds to writes to regular memory, placing stickers corresponds to writes to the write-once memory. Bounds are shown on the pebble-sticker tradeoffs required to evaluate trees and planar graphs. Finally, we define the complexity class WO-PSPACE as the class of problems which can be solved with a polynomial amount of write-once memory, and show that it is equal to P. The research of S. Irani was supported by NSF Grant No. DCF-85-13926 and a Tandem Corporation Fellowship. R. Rubinfeld's research was supported by NSF Grant No. CCR 88-13632 and an IBM Graduate Fellowship.
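As a toy illustration of the model (not the paper's simulations), the following hypothetical Python class enforces the write-once discipline: a bit may be set to 1 but never cleared back to 0.

```python
class WriteOnceMemory:
    """Toy write-once memory: bits start at 0 and may be set but never reset."""

    def __init__(self, size):
        self.bits = [0] * size

    def set_bit(self, i):
        self.bits[i] = 1  # setting an already-set bit is harmless

    def write(self, i, value):
        if value == 0 and self.bits[i] == 1:
            raise ValueError("write-once violation: cannot reset a set bit")
        if value == 1:
            self.bits[i] = 1

    def read(self, i):
        return self.bits[i]

mem = WriteOnceMemory(8)
mem.set_bit(3)
print(mem.read(3))   # 1
# mem.write(3, 0)    # would raise: bits cannot be reset
```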

4.
We give the first algorithm that is both query-efficient and time-efficient for testing whether an unknown function f: {0,1}^n → {−1,1} is an s-sparse GF(2) polynomial versus ε-far from every such polynomial. Our algorithm makes poly(s, 1/ε) black-box queries to f and runs in time n · poly(s, 1/ε). The only previous algorithm for this testing problem (Diakonikolas et al. in Proceedings of the 48th Annual Symposium on Foundations of Computer Science, FOCS, pp. 549–558, 2007) used poly(s, 1/ε) queries, but had running time exponential in s and super-polynomial in 1/ε.
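To make the objects concrete: an s-sparse GF(2) polynomial is a XOR of at most s monomials (ANDs of variables), and "ε-far" means the function disagrees with every such polynomial on more than an ε fraction of inputs. The sketch below (hypothetical encoding, nothing like the paper's tester) evaluates such a polynomial and estimates the distance of a black-box function from it by uniform sampling.

```python
import random

def eval_sparse_gf2(monomials, x):
    """Evaluate an s-sparse GF(2) polynomial given as a list of monomials.

    monomials: list of tuples of variable indices; each monomial is the AND
    of its variables, and the polynomial is their XOR (sum over GF(2)).
    x: list of 0/1 input bits.
    """
    return sum(all(x[i] for i in mono) for mono in monomials) % 2

def estimate_distance(f, monomials, n, samples=1000):
    """Estimate Pr_x[f(x) != p(x)] by uniform sampling (illustration only)."""
    disagree = 0
    for _ in range(samples):
        x = [random.randint(0, 1) for _ in range(n)]
        if f(x) != eval_sparse_gf2(monomials, x):
            disagree += 1
    return disagree / samples

# Hypothetical example: p(x) = x0*x1 XOR x2 on 4 variables
p = [(0, 1), (2,)]
f = lambda x: (x[0] & x[1]) ^ x[2]
print(estimate_distance(f, p, n=4))  # ~0.0
```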

5.
The author studies the complexity of the problem of allocating modules to processors in a distributed system so as to minimize total communication and execution costs. He shows that unless P=NP, there can be no polynomial-time ε-approximate algorithm for the problem, nor can there exist a local search algorithm that requires polynomial time per iteration and yields an optimum assignment. Both results hold even if the communication graph is planar and bipartite. On the positive side, it is shown that if the communication graph is a partial k-tree or an almost-tree with parameter k, the module allocation problem can be solved in polynomial time.
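To fix the objective being minimized, a hypothetical cost evaluator for a module-to-processor assignment (execution costs plus communication costs between modules placed on different processors) might look like the following; the paper's hardness and tractability results concern optimizing this kind of objective, not this code.

```python
def allocation_cost(assign, exec_cost, comm_cost):
    """Total cost of assigning modules to processors.

    assign: dict module -> processor
    exec_cost: dict (module, processor) -> cost of running the module there
    comm_cost: dict (module_a, module_b) -> cost paid if a and b end up on
               different processors (edges of the communication graph)
    """
    total = sum(exec_cost[(m, p)] for m, p in assign.items())
    for (a, b), c in comm_cost.items():
        if assign[a] != assign[b]:
            total += c
    return total

# Hypothetical example: two modules, two processors
assign = {"m1": "p1", "m2": "p2"}
exec_cost = {("m1", "p1"): 2, ("m1", "p2"): 5, ("m2", "p1"): 4, ("m2", "p2"): 1}
comm_cost = {("m1", "m2"): 3}
print(allocation_cost(assign, exec_cost, comm_cost))  # 2 + 1 + 3 = 6
```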

6.
Boolean functions that have constant degree polynomial representation over a fixed finite ring form a natural and strict subclass of the complexity class ACC0. They are also precisely the functions computable efficiently by programs over fixed and finite nilpotent groups. This class is not known to be learnable in any reasonable learning model. In this paper, we provide a deterministic polynomial time algorithm for learning Boolean functions represented by polynomials of constant degree over arbitrary finite rings from membership queries, with the additional constraint that each variable in the target polynomial appears in a constant number of monomials. Our algorithm extends to superconstant but low degree polynomials and still runs in quasipolynomial time.
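As a much simpler instance of learning from membership queries (degree 1 over a ring Z_m; nothing like the paper's general algorithm), one can recover a target p(x) = c0 + Σ_i c_i·x_i by querying the all-zeros point and the standard unit vectors, as sketched below.

```python
def learn_degree1_mod(query, n, m):
    """Learn p(x) = c0 + sum_i c_i * x_i over Z_m using n + 1 membership queries.

    query: black box returning p(x) mod m for a 0/1 input vector x.
    Returns (c0, [c_1, ..., c_n]) reduced mod m.
    """
    zero = [0] * n
    c0 = query(zero) % m
    coeffs = []
    for i in range(n):
        e_i = zero.copy()
        e_i[i] = 1
        coeffs.append((query(e_i) - c0) % m)
    return c0, coeffs

# Hypothetical target over Z_7: p(x) = 3 + 2*x0 + 5*x2
target = lambda x: (3 + 2 * x[0] + 5 * x[2]) % 7
print(learn_degree1_mod(target, n=3, m=7))  # (3, [2, 0, 5])
```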

7.
8.
We define and study the complexity of robust polynomials for Boolean functions and the related fault-tolerant quantum decision trees, where input bits are perturbed by noise. We compare several different possible definitions. Our main results are: (1) For every n-bit Boolean function f there is an n-variate polynomial p of degree O(n) that robustly approximates it, in the sense that p(x) remains close to f(x) if we slightly vary each of the n inputs of the polynomial. (2) There is an O(n)-query quantum algorithm that robustly recovers n noisy input bits. Hence every n-bit function can be quantum computed with O(n) queries in the presence of noise. This contrasts with the classical model of Feige et al., where functions such as parity need Θ(n log n) queries. We give several extensions and applications of these results.
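The classical Θ(n log n) overhead mentioned above comes from amplifying each noisy bit by repeated queries and majority vote; a hedged sketch of that standard classical amplification step (not the paper's quantum algorithm) is shown below.

```python
import random

def noisy_query(bit, eps=0.1):
    """Return the bit, flipped independently with probability eps."""
    return bit ^ (random.random() < eps)

def robust_read(bit, repetitions):
    """Recover a noisy bit by majority vote over repeated queries.

    Taking repetitions = O(log n) per bit drives the per-bit error below 1/n,
    which is the source of the classical n log n query count for parity.
    """
    ones = sum(noisy_query(bit) for _ in range(repetitions))
    return int(ones * 2 > repetitions)

print(robust_read(1, repetitions=25))  # 1 with high probability
```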

9.
Recursive analysis, the theory of computation of functions on real numbers, has been studied from various aspects. We investigate the computational complexity of real functions using the methods of recursive function theory. Partial recursive real functions are defined and their domains are characterized as the recursively open sets. We define the time complexity of recursive real continuous functions and show that the time complexity and the modulus of uniform continuity of a function are closely related. We study the complexity of the roots and the differentiability of polynomial time computable real functions. In particular, a polynomial time computable real function may have a root of arbitrarily high complexity and may be nowhere differentiable. The concepts of the space complexity and nondeterministic computation are used to study the complexity of the integrals and the maximum values of real functions. These problems are shown to be related to the "P=?NP" and the "P=?PSPACE" questions.

10.
Time-Space Tradeoffs for Undirected Graph Traversal by Graph Automata
We investigate time-space tradeoffs for traversing undirected graphs, using a variety of structured models that are all variants of Cook and Rackoff's "Jumping Automata for Graphs." Our strongest tradeoff is a quadratic lower bound on the product of time and space for graph traversal. For example, achieving linear time requires linear space, implying that depth-first search is optimal. Since our bound in fact applies to nondeterministic algorithms for nonconnectivity, it also implies that closure under complementation of nondeterministic space-bounded complexity classes is achieved only at the expense of increased time. To demonstrate that these structured models are realistic, we also investigate their power. In addition to admitting well-known algorithms such as depth-first search and random walk, we show that one simple variant of this model is nearly as powerful as a Turing machine. Specifically, for general undirected graph problems, it can simulate a Turing machine with only a constant factor increase in space and a polynomial factor increase in time.
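Depth-first search, which the tradeoff shows is optimal in this setting (linear time forces linear space), can be sketched iteratively as follows; the adjacency lists and node labels here are hypothetical.

```python
def depth_first_search(adj, start):
    """Iterative DFS over an undirected graph given as an adjacency-list dict.

    Runs in O(V + E) time and uses O(V) space for the visited set and stack,
    matching the linear-time / linear-space point of the tradeoff curve.
    """
    visited = set()
    order = []
    stack = [start]
    while stack:
        v = stack.pop()
        if v in visited:
            continue
        visited.add(v)
        order.append(v)
        for w in adj[v]:
            if w not in visited:
                stack.append(w)
    return order

# Hypothetical undirected graph
adj = {1: [2, 3], 2: [1, 4], 3: [1], 4: [2]}
print(depth_first_search(adj, 1))  # e.g. [1, 3, 2, 4]
```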

11.
The combinational complexity of a system of partial derivatives in the basis of linear functions is established for a Boolean function of n variables that is realized by a Zhegalkin polynomial. An algorithm whose complexity equals 3^n − 2^n modulo-2 additions is proposed for computing all partial derivatives of such a function from the coefficients of its Zhegalkin polynomial.
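The Boolean partial derivative with respect to x_i is ∂f/∂x_i = f|_{x_i=0} ⊕ f|_{x_i=1}; a small truth-table-based sketch (hypothetical, not the coefficient-based algorithm of the paper) is given below.

```python
from itertools import product

def partial_derivative(f, n, i):
    """Boolean partial derivative df/dx_i as a truth table over the other bits.

    f: function taking a tuple of n bits and returning 0/1.
    Returns a dict mapping each assignment of the remaining variables to
    f|_{x_i=0} XOR f|_{x_i=1}.
    """
    table = {}
    for rest in product((0, 1), repeat=n - 1):
        x0 = list(rest[:i]) + [0] + list(rest[i:])
        x1 = list(rest[:i]) + [1] + list(rest[i:])
        table[rest] = f(tuple(x0)) ^ f(tuple(x1))
    return table

# Hypothetical example: f(x0, x1) = x0 AND x1; df/dx0 = x1
f = lambda x: x[0] & x[1]
print(partial_derivative(f, n=2, i=0))  # {(0,): 0, (1,): 1}
```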

12.
1 Introduction. Many decision problems are NP-complete and their associated optimization problems are NP-equivalent, but these optimization problems have very different approximabilities. For example, the knapsack problem has a polynomial-time approximation scheme, while TSP is not constant-approximable unless P=NP. What makes the difference among NP optimization problems (NPOPs)? In [3], Kolaitis and Thakur investigated the approximability of NPOPs by means of representing NPOPs with first-order sentences. They proved that if P≠NP, then it is an undecidable…

13.
We consider the problem of testing whether a given system of equations over a fixed finite semigroup S has a solution. For the case where S is a monoid, we prove that the problem is computable in polynomial time when S is commutative and is the union of its subgroups, but is NP-complete otherwise. When S is a monoid or a regular semigroup, we obtain similar dichotomies for the restricted version of the problem where no variable occurs on the right-hand side of each equation. We stress connections between these problems and constraint satisfaction problems. In particular, for any finite domain D and any finite set of relations Γ over D, we construct a finite semigroup S_Γ such that CSP(Γ) is polynomial-time equivalent to the satisfiability problem for systems of equations over S_Γ.
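A brute-force solver makes the problem statement concrete: given a finite semigroup's multiplication table and a system of equations whose sides are words over variables and constants, try all assignments. This naive search is exponential; the paper's dichotomies identify when polynomial time is possible. The encoding below is hypothetical.

```python
from itertools import product

def eval_word(word, assign, table):
    """Evaluate a word (list of ('var', name) / ('const', element) symbols)
    in the semigroup given by its multiplication table."""
    values = [assign[s] if kind == "var" else s for kind, s in word]
    result = values[0]
    for v in values[1:]:
        result = table[result][v]
    return result

def solve_system(equations, variables, elements, table):
    """Return a satisfying assignment for a system of equations, or None.

    equations: list of (left_word, right_word) pairs.
    Brute force over |S|^|V| assignments (illustration only).
    """
    for combo in product(elements, repeat=len(variables)):
        assign = dict(zip(variables, combo))
        if all(eval_word(l, assign, table) == eval_word(r, assign, table)
               for l, r in equations):
            return assign
    return None

# Hypothetical semigroup: ({0, 1}, AND); solve the equation x·1 = 1
table = {0: {0: 0, 1: 0}, 1: {0: 0, 1: 1}}
eqs = [([("var", "x"), ("const", 1)], [("const", 1)])]
print(solve_system(eqs, ["x"], [0, 1], table))  # {'x': 1}
```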

14.
Karchmer, Raz, and Wigderson (1995) discuss the circuit depth complexity of n-bit Boolean functions constructed by composing up to d = log n/log log n levels of k = log n-bit Boolean functions. Any such function is in AC^1. They conjecture that circuit depth is additive under composition, which would imply that any (bounded fan-in) circuit for this problem requires depth close to dk = log^2 n/log log n. This would separate AC^1 from NC^1. They recommend using the communication game characterization of circuit depth. In order to develop techniques for using communication complexity to prove circuit depth lower bounds, they suggest an intermediate communication complexity problem which they call the Universal Composition Relation. We give an almost optimal lower bound of dk − O(d^2 (k log k)^{1/2}) for this problem. In addition, we present a proof, directly in terms of communication complexity, that there is a function on k bits requiring circuit depth k − O(log k). Although this fact can be easily established using a counting argument, we hope that the ideas in our proof will be incorporated more easily into subsequent arguments which use communication complexity to prove circuit depth bounds. Received: July 30, 1999.
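The composed functions in question are built by repeatedly applying a k-bit function to k-bit blocks of the input; a hypothetical sketch of one level of this block composition is below (the lower bounds concern circuit depth for the d-level iterate, not this code).

```python
def compose_level(f, g, bits, k):
    """One level of block composition: split the input into k-bit blocks,
    apply g to each block, then apply f to the resulting bit vector.

    f, g: Boolean functions on tuples of bits (g expects k bits).
    bits: a tuple whose length is a multiple of k.
    """
    blocks = [bits[i:i + k] for i in range(0, len(bits), k)]
    return f(tuple(g(b) for b in blocks))

# Hypothetical example with k = 2: g = XOR of a block, f = AND of block outputs
g = lambda b: b[0] ^ b[1]
f = lambda ys: int(all(ys))
print(compose_level(f, g, (0, 1, 1, 0), k=2))  # g gives (1, 1) -> f = 1
```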

15.
B. Fu, R. Beigel. Algorithmica, 1999, 24(2): 87–95
The number of molecular strands used by a molecular algorithm is an important measure of the algorithm's complexity. This measure is also called the volume used by the algorithm. We prove that three important polynomial-time models of molecular computation with bounded volume are equivalent to models of polynomial-time Turing machine computation with bounded nondeterminism. Without any assumption, we show that the Split operation does not increase the power of polynomial-time molecular computation. Assuming a plausible separation between Turing machine complexity classes, the Amplify operation does increase the power of polynomial-time molecular computation. Received September 28, 1997; revised March 24, 1998.

16.
We exhibit a polynomial time computable plane curve Γ that has finite length, does not intersect itself, and is smooth except at one endpoint, but has the following property. For every computable parametrization f of Γ and every positive integer m, there is some positive-length subcurve of Γ that f retraces at least m times. In contrast, every computable curve of finite length that does not intersect itself has a constant-speed (hence non-retracing) parametrization that is computable relative to the halting problem.

17.
林杰, 覃飙, 覃雄派. 《计算机应用》 (Journal of Computer Applications), 2018, 38(7): 1893–1897
To address causality analysis for inequality join queries in databases, this paper introduces and implements the computation of resilience. To reduce the time complexity of computing resilience for path-type inequality join queries, a dynamic programming algorithm, DPResi, is proposed. First, based on the structure of path-type inequality join queries and the max-flow/min-cut theorem, a Min-Cut algorithm with polynomial time complexity is implemented. Then, by compiling the provenance expression of a Boolean inequality join query into a provenance graph, the computation of resilience is transformed into computing a shortest distance in that graph; combining the containment relation of the provenance graph with its optimal-substructure property, the linear-time DPResi algorithm is obtained via dynamic programming. Extensive experiments on the TPC-H dataset show that, compared with the Min-Cut algorithm, DPResi greatly improves the efficiency of resilience computation and scales well.
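The core reduction turns resilience into a shortest-distance computation over a provenance graph, which dynamic programming solves in time linear in the size of the graph. The sketch below is a generic shortest-path DP over a DAG with hypothetical node names and edge weights, not the DPResi implementation.

```python
def dag_shortest_distance(order, edges, source, target):
    """Shortest distance in a weighted DAG by dynamic programming.

    order: list of nodes in topological order.
    edges: dict node -> list of (successor, weight) pairs.
    Runs in O(V + E), mirroring the linear-time flavor of a DP over a
    provenance graph.
    """
    INF = float("inf")
    dist = {v: INF for v in order}
    dist[source] = 0
    for v in order:
        if dist[v] == INF:
            continue
        for w, weight in edges.get(v, []):
            dist[w] = min(dist[w], dist[v] + weight)
    return dist[target]

# Hypothetical provenance-like DAG
order = ["s", "a", "b", "t"]
edges = {"s": [("a", 1), ("b", 2)], "a": [("t", 3)], "b": [("t", 1)]}
print(dag_shortest_distance(order, edges, "s", "t"))  # 3
```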

18.
Derandomizing Polynomial Identity Tests Means Proving Circuit Lower Bounds
We show that derandomizing Polynomial Identity Testing is essentially equivalent to proving arithmetic circuit lower bounds for NEXP. More precisely, we prove that if one can test in polynomial time (or even nondeterministic subexponential time, infinitely often) whether a given arithmetic circuit over the integers computes an identically zero polynomial, then either (i) NEXP ⊄ P/poly or (ii) the Permanent is not computable by polynomial-size arithmetic circuits. We also prove a (partial) converse: if the Permanent requires superpolynomial-size arithmetic circuits, then one can test in subexponential time whether a given arithmetic circuit of polynomially bounded degree computes an identically zero polynomial. Since Polynomial Identity Testing is a coRP problem, we obtain the following corollary: if RP = P, then NEXP is not computable by polynomial-size arithmetic circuits. Thus establishing that RP = coRP or BPP = P would require proving superpolynomial lower bounds for Boolean or arithmetic circuits. We also show that any derandomization of RNC would yield new circuit lower bounds for a language in NEXP. We also prove unconditionally that NEXP^RP does not have polynomial-size Boolean or arithmetic circuits. Finally, we show that NEXP ⊄ P/poly if both BPP = P and low-degree testing is in P; here low-degree testing is the problem of checking whether a given Boolean circuit computes a function that is close to some low-degree polynomial over a finite field.
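Polynomial Identity Testing is in coRP via the Schwartz-Zippel lemma: evaluate the circuit at a random point from a large enough range, and a nonzero polynomial is unlikely to vanish. A hedged sketch with a hypothetical circuit encoding follows; the paper is about what derandomizing this test would imply, not about the test itself.

```python
import random

def eval_circuit(gates, inputs):
    """Evaluate an arithmetic circuit given as a list of gates.

    Each gate is ('input', name), ('const', value), or (op, i, j) with
    op in {'+', '*'} referring to earlier gate indices; the last gate is
    the output. This encoding is hypothetical.
    """
    values = []
    for gate in gates:
        if gate[0] == "input":
            values.append(inputs[gate[1]])
        elif gate[0] == "const":
            values.append(gate[1])
        else:
            op, i, j = gate
            values.append(values[i] + values[j] if op == "+" else values[i] * values[j])
    return values[-1]

def probably_zero(gates, var_names, degree_bound, trials=20):
    """Randomized identity test: accept iff the circuit evaluates to 0 at
    random points; for a nonzero polynomial the error probability is at most
    (degree_bound / range_size)^trials by Schwartz-Zippel."""
    range_size = 100 * degree_bound
    for _ in range(trials):
        point = {v: random.randrange(range_size) for v in var_names}
        if eval_circuit(gates, point) != 0:
            return False
    return True

# Hypothetical circuit computing x*y - x*y (identically zero)
gates = [("input", "x"), ("input", "y"), ("*", 0, 1), ("const", -1),
         ("*", 2, 3), ("+", 2, 4)]
print(probably_zero(gates, ["x", "y"], degree_bound=2))  # True
```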

19.
20.
We give a rigorous deterministic polynomial time algorithm for the modular inversion hidden number problem introduced by D. Boneh, S. Halevi and N.A. Howgrave-Graham in 2001. For our algorithm, we need to be given about 2/3 of the bits of the output, which matches one of the heuristic algorithms of D. Boneh, S. Halevi and N.A. Howgrave-Graham and answers one of their open questions. However their more efficient algorithm that requires only 1/3 of the bits of the output still remains heuristic.
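In the modular inversion hidden number problem, one sees random x_i together with the most significant bits of (α + x_i)^{-1} mod p and must recover the hidden α. The following hypothetical generator only sets up such instances (recovering α is the hard part the paper's algorithm addresses).

```python
import random

def mihnp_samples(p, alpha, num_samples, known_bits):
    """Generate modular inversion hidden number problem instances.

    Returns (x_i, msb_i) pairs where msb_i exposes the top `known_bits` bits
    of (alpha + x_i)^{-1} mod p; the task is to recover alpha from them.
    """
    shift = p.bit_length() - known_bits
    samples = []
    for _ in range(num_samples):
        x = random.randrange(1, p)
        while (alpha + x) % p == 0:      # avoid a non-invertible point
            x = random.randrange(1, p)
        inv = pow((alpha + x) % p, -1, p)  # modular inverse (Python 3.8+)
        samples.append((x, inv >> shift))  # keep only the top bits
    return samples

# Hypothetical small instance; the rigorous algorithm needs about 2/3 of the bits
p, alpha = 1000003, 271828
print(mihnp_samples(p, alpha, num_samples=3, known_bits=14))
```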

