Similar Documents
20 similar documents found.
1.
In this paper, we give the first construction of a pseudorandom generator, with seed length O(log n), for CC0[p], the class of constant-depth circuits with unbounded fan-in MOD p gates, for some prime p. More accurately, the seed length of our generator is O(log n) for any constant error ${\epsilon > 0}$. In fact, we obtain our generator by fooling distributions generated by low-degree polynomials, over ${\mathbb{F}_p}$, when evaluated on the Boolean cube. This result significantly extends previous constructions that either required a long seed (Luby et al. 1993) or could only fool the distribution generated by linear functions over ${\mathbb{F}_p}$, when evaluated on the Boolean cube (Lovett et al. 2009; Meka & Zuckerman 2009). En route to constructing our PRG, we prove two structural results for low-degree polynomials over finite fields that can be of independent interest.
  1. Let f be an n-variate degree d polynomial over ${\mathbb{F}_p}$ . Then, for every ${\epsilon > 0}$ , there exists a subset ${S \subset [n]}$ , whose size depends only on d and ${\epsilon}$ , such that ${\sum_{\alpha \in \mathbb{F}_p^n: \alpha \ne 0, \alpha_S=0}|\hat{f}(\alpha)|^2 \leq \epsilon}$ . Namely, there is a constant size subset S such that the total weight of the nonzero Fourier coefficients that do not involve any variable from S is small.
  2. Let f be an n-variate degree d polynomial over ${\mathbb{F}_p}$ . If the distribution of f when applied to uniform zero–one bits is ${\epsilon}$ -far (in statistical distance) from its distribution when applied to biased bits, then for every ${\delta > 0}$ , f can be approximated over zero–one bits, up to error δ, by a function of a small number (depending only on ${\epsilon,\delta}$ and d) of lower degree polynomials.

2.
The algebraic immunity of a Boolean function is a parameter that characterizes the possibility to bound this function from above or below by a nonconstant Boolean function of a low algebraic degree. We obtain lower bounds on the algebraic immunity for a class of functions expressed through the inversion operation in the field GF(2^n), as well as for larger classes of functions defined by their trace forms. In particular, for n ≥ 5, the algebraic immunity of the function Tr_n(x^{-1}) has a lower bound ⌊2√(n+4)⌋ − 4, which is close enough to the previously obtained upper bound ⌈√n⌉ + ⌈n/⌈√n⌉⌉ − 2. We obtain a polynomial algorithm which, given the trace form of a Boolean function f, computes generating sets of functions of degree ≤ d for the following pair of spaces. Each function of the first (linear) space bounds f from below, and each function of the second (affine) space bounds f from above. Moreover, at the output of the algorithm, each function of a generating set is represented both as its trace form and as a polynomial of Boolean variables.
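To make the notion concrete, the sketch below computes algebraic immunity by brute force: a nonzero annihilator of degree ≤ d exists iff the GF(2) evaluation matrix of all monomials of degree ≤ d on the support of f (or of f ⊕ 1) has a nontrivial kernel. This is exponential in n and is not the paper's polynomial trace-form algorithm; the function name and the majority example are illustrative assumptions.

```python
from itertools import combinations, product

def algebraic_immunity(f, n):
    """Brute-force algebraic immunity of a Boolean function f: {0,1}^n -> {0,1}.
    AI(f) = smallest d such that some nonzero g of degree <= d satisfies
    g*f = 0 or g*(f XOR 1) = 0, i.e. g vanishes on supp(f) or supp(1 + f)."""
    points = list(product([0, 1], repeat=n))

    def monomials(d):
        return [s for k in range(d + 1) for s in combinations(range(n), k)]

    def nonzero_annihilator_exists(support, d):
        mons = monomials(d)
        # Evaluation matrix over GF(2): rows = points of the support,
        # columns = monomials of degree <= d.
        rows = [[int(all(x[i] for i in m)) for m in mons] for x in support]
        rank = 0
        for col in range(len(mons)):
            piv = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
            if piv is None:
                continue
            rows[rank], rows[piv] = rows[piv], rows[rank]
            for r in range(len(rows)):
                if r != rank and rows[r][col]:
                    rows[r] = [a ^ b for a, b in zip(rows[r], rows[rank])]
            rank += 1
        # A nontrivial kernel means a nonzero annihilator of degree <= d exists.
        return rank < len(mons)

    supp = [x for x in points if f(x)]
    co_supp = [x for x in points if not f(x)]
    for d in range(n + 1):
        if nonzero_annihilator_exists(supp, d) or nonzero_annihilator_exists(co_supp, d):
            return d

# Example: majority of 5 bits has algebraic immunity ceil(5/2) = 3.
maj5 = lambda x: int(sum(x) >= 3)
print(algebraic_immunity(maj5, 5))  # -> 3
```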

3.
In this paper we investigate the expected complexity E(C) of distributive (“bucket”) sorting algorithms on a sample X_1, ..., X_n drawn from a density f on ℝ^1. Assuming constant-time bucket membership determination and the use of an average-time g(n) algorithm for subsequent sorting within each bucket (where g is convex, g(n)/n ↑ ∞, g(n)/n^2 is nonincreasing, and g is independent of f), the following is shown:
  1. If f has compact support, then ∫g(f(x))dx < ∞ if and only if E(C) = O(n).
  2. If f does not have compact support, then E(C)/n → ∞ as n → ∞.
No additional restrictions are put on f.
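For reference, here is a minimal sketch of the sorting model analyzed above: n equal-width buckets, constant-time bucket membership, and a comparison sort (average time roughly m log m) inside each bucket. The names and the uniform test sample are assumptions for illustration, not part of the paper.

```python
import random

def bucket_sort(xs, subsort=sorted):
    """Distributive ("bucket") sort sketch: n equal-width buckets over the
    sample range, constant-time bucket membership, then a comparison sort
    inside each bucket (playing the role of the g(n)-time subroutine)."""
    n = len(xs)
    if n <= 1:
        return list(xs)
    lo, hi = min(xs), max(xs)
    width = (hi - lo) / n or 1.0
    buckets = [[] for _ in range(n)]
    for x in xs:
        # Constant-time membership: the index is a single arithmetic expression.
        i = min(int((x - lo) / width), n - 1)
        buckets[i].append(x)
    out = []
    for b in buckets:
        out.extend(subsort(b))  # per-bucket cost ~ g(|b|)
    return out

# For a nice density (e.g. uniform on [0,1]) the integral condition above
# holds, buckets stay small on average, and the expected work is linear;
# a heavily peaked or heavy-tailed density can push E(C)/n to infinity.
sample = [random.random() for _ in range(10_000)]
assert bucket_sort(sample) == sorted(sample)
```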

4.
We provide optimal parallel solutions to several link-distance problems set in trapezoided rectilinear polygons. All our main parallel algorithms are deterministic and designed to run on the exclusive read exclusive write parallel random access machine (EREW PRAM). Let P be a trapezoided rectilinear simple polygon with n vertices. In O(log n) time using O(n/log n) processors we can optimally compute:
  1. Minimum rectilinear link paths, or shortest paths in the L_1 metric, from any point in P to all vertices of P.
  2. Minimum rectilinear link paths from any segment inside P to all vertices of P.
  3. The rectilinear window (histogram) partition of P.
  4. Both covering radii and vertex intervals for any diagonal of P.
  5. A data structure to support rectilinear link-distance queries between any two points in P (queries can be answered optimally in O(log n) time by a single processor).
Our solution to 5 is based on a new linear-time sequential algorithm for this problem, which is also provided here. This improves on the previously best-known sequential algorithm for this problem, which used O(n log n) time and space [5]. We develop techniques for solving link-distance problems in parallel which are expected to find applications in the design of other parallel computational geometry algorithms. We employ these parallel techniques, for example, to compute (on a CREW PRAM) optimally the link diameter, the link center, and the central diagonal of a rectilinear polygon.

5.
We define a class of n-ary relations on strings called the regular prefix relations, and give four alternative characterizations of this class:
  1. the relations recognized by a new type of automaton, the prefix automata,
  2. the relations recognized by tree automata specialized to relations on strings,
  3. the relations between strings definable in the second-order theory of k successors,
  4. the smallest class containing the regular sets and the prefix relation, and closed under the Boolean operations, Cartesian product, projection, explicit transformation, and concatenation with Cartesian products of regular sets.
We give concrete examples of regular prefix relations, and a pumping argument for prefix automata. An application of these results to the study of inductive inference of regular sets is described.

6.
Trial and error     
A PAC-learning algorithm is d-space bounded if it stores at most d examples from the sample at any time. We characterize the d-space learnable concept classes. For this purpose we introduce the compression parameter of a concept class b and design our Trial and Error Learning Algorithm. We show: b is d-space learnable if and only if the compression parameter of b is at most d. Unlike previous approaches, e.g. that of Floyd, which present consistent space-bounded learning algorithms but are restricted to very special concept classes, our learning algorithm does not produce a hypothesis consistent with the whole sample. On the other hand, our algorithm needs large samples; the compression parameter appears as an exponent in the sample size. We present several examples of polynomial-time space-bounded learnable concept classes:
  • all intersection closed concept classes with finite VC-dimension (a sketch for this case follows the list).
  • convex n-gons in ℝ^2.
  • halfspaces in ℝ^n.
  • unions of triangles in ℝ^2.
We further relate the compression parameter to the VC-dimension, and discuss variants of this parameter.
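A hypothetical illustration of the first class in the list: for the intersection-closed class of axis-parallel rectangles, the closure of the positive examples can be maintained in constant space. This shows only the space-bounded flavour of the setting; it is not the Trial and Error Learning Algorithm, and all names are invented for the example.

```python
import random

def learn_rectangle(stream):
    """Space-bounded learning sketch for one intersection-closed class:
    axis-parallel rectangles in the plane. Only the current hypothesis
    (a bounding box) is stored, never the sample."""
    box = None  # (xmin, xmax, ymin, ymax) of the closure of the positives
    for (x, y), label in stream:
        if label:  # positive example: grow the closure to cover it
            if box is None:
                box = (x, x, y, y)
            else:
                box = (min(box[0], x), max(box[1], x),
                       min(box[2], y), max(box[3], y))
        # Negatives are ignored: the closure hypothesis is already the
        # smallest rectangle consistent with the positives seen so far.
    def hypothesis(p):
        return (box is not None and
                box[0] <= p[0] <= box[1] and box[2] <= p[1] <= box[3])
    return hypothesis

# Target concept: the unit square [0,1] x [0,1].
target = lambda p: 0 <= p[0] <= 1 and 0 <= p[1] <= 1
pts = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(2000)]
h = learn_rectangle((p, target(p)) for p in pts)
print(sum(h(p) != target(p) for p in pts))  # sample error of the learned box
```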

    7.
    We report progress on the NL versus UL problem.
  • We show that counting the number of s-t paths in graphs where the number of s-v paths for any v is bounded by a polynomial can be done in FUL, the unambiguous log-space function class. Several new upper bounds follow from this, including ${ReachFewL \subseteq UL}$ and ${LFew \subseteq UL^{FewL}}$. (A small path-counting sketch follows this list.)
  • We investigate the complexity of min-uniqueness—a central notion in studying the NL versus UL problem. In this regard we revisit the class OptL[log n] and introduce UOptL[log n], an unambiguous version of OptL[log n]. We investigate the relation between UOptL[log n] and other existing complexity classes.
  • We consider the unambiguous hierarchies over UL and UOptL[log n]. We show that the hierarchy over UOptL[log n] collapses. This implies that ${ULH \subseteq L^{promiseUL}}$, thus collapsing the UL hierarchy.
  • We show that the reachability problem over graphs embedded on 3 pages is complete for NL. This contrasts with the reachability problem over graphs embedded on 2 pages, which is log-space equivalent to the reachability problem in planar graphs and hence is in UL.
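The counting problem in the first item above can be illustrated with an ordinary dynamic program over a DAG. The sketch counts s-t paths but says nothing about doing so unambiguously in logarithmic space, which is the paper's actual contribution; the names and the example graph are assumptions.

```python
from collections import defaultdict

def count_st_paths(n, edges, s, t):
    """Count s-t paths in a DAG by dynamic programming over a topological order."""
    graph = defaultdict(list)
    indeg = [0] * n
    for u, v in edges:
        graph[u].append(v)
        indeg[v] += 1
    # Kahn's algorithm for a topological order.
    order, queue = [], [v for v in range(n) if indeg[v] == 0]
    while queue:
        u = queue.pop()
        order.append(u)
        for v in graph[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    # paths[v] = number of distinct s-v paths; in a ReachFewL-style graph
    # these counts stay polynomially bounded in n.
    paths = [0] * n
    paths[s] = 1
    for u in order:
        for v in graph[u]:
            paths[v] += paths[u]
    return paths[t]

# A small diamond-shaped DAG: two s-t paths.
print(count_st_paths(4, [(0, 1), (0, 2), (1, 3), (2, 3)], 0, 3))  # -> 2
```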

    8.
    We define the complexity of a computational problem given by a relation using the model of computation trees together with the Ostrowski complexity measure. Natural examples from linear algebra are:
  • KER_n: Compute a basis of the kernel for a given n×n matrix (a direct sketch follows this list),
  • OGB_n: Find an invertible matrix that transforms a given symmetric n×n matrix (quadratic form) into diagonal form,
  • SPR_n: Find a sparse representation of a given n×n matrix.
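For KER_n (referenced above), a direct if naive solution is Gaussian elimination over the rationals. The sketch below only pins down the problem being measured; it makes no claim about computation-tree complexity in the Ostrowski measure, and the function name is assumed.

```python
from fractions import Fraction

def kernel_basis(A):
    """Basis of the kernel of a rational matrix A via reduced row echelon form."""
    m = len(A)
    n = len(A[0]) if m else 0
    M = [[Fraction(x) for x in row] for row in A]
    pivots, row = [], 0
    for col in range(n):
        piv = next((r for r in range(row, m) if M[r][col] != 0), None)
        if piv is None:
            continue
        M[row], M[piv] = M[piv], M[row]
        pivot = M[row][col]
        M[row] = [x / pivot for x in M[row]]
        for r in range(m):
            if r != row and M[r][col] != 0:
                factor = M[r][col]
                M[r] = [a - factor * b for a, b in zip(M[r], M[row])]
        pivots.append(col)
        row += 1
    free = [c for c in range(n) if c not in pivots]
    basis = []
    for f in free:  # one basis vector per free column
        v = [Fraction(0)] * n
        v[f] = Fraction(1)
        for r, c in enumerate(pivots):
            v[c] = -M[r][f]
        basis.append(v)
    return basis

# Example: a rank-1 matrix in Q^{2x3}; its kernel is 2-dimensional.
print(kernel_basis([[1, 2, 3], [2, 4, 6]]))
```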

    9.
    In this paper, for a finitely generated monoid M, we tackle the following three questions:
    1. Is it possible to give a characterization of rational subsets of M which have polynomial growth?
    2. What is the structure of the counting function of rational sets which have polynomial growth?
  3. Is it true that every rational subset of M has either exponential or polynomial growth? Can one decide, for a given rational set, which of the two options holds?
We give a positive answer to all the previous questions in the case that M is a direct product of free monoids. Some of the proved results also extend to trace monoids.
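The counting function from question 2 can be made concrete when the rational set is presented by a DFA: a dynamic program over states tabulates the number of accepted words of each length. This hypothetical sketch only tabulates the counts; it does not decide between polynomial and exponential growth.

```python
def growth_counts(delta, start, accepting, alphabet, max_len):
    """Counting function of a rational (regular) set given by a DFA:
    c(k) = number of accepted words of length k, by DP over states."""
    dist = {start: 1}  # number of words of the current length reaching each state
    counts = []
    for _ in range(max_len + 1):
        counts.append(sum(c for q, c in dist.items() if q in accepting))
        nxt = {}
        for q, c in dist.items():
            for a in alphabet:
                r = delta[(q, a)]
                nxt[r] = nxt.get(r, 0) + c
        dist = nxt
    return counts

# a*b*: polynomial growth, with c(k) = k + 1 accepted words of length k.
delta = {(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 2, (1, 'b'): 1,
         (2, 'a'): 2, (2, 'b'): 2}          # state 2 is a dead state
print(growth_counts(delta, 0, {0, 1}, 'ab', 5))  # -> [1, 2, 3, 4, 5, 6]
```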

    10.
    We give a self-reduction for the Circuit Evaluation problem (CircEval) and prove the following consequences.
  1. Amplifying size–depth lower bounds. If CircEval has Boolean circuits of n^k size and n^{1−δ} depth for some k and δ, then for every ${\epsilon > 0}$, there is a δ′ > 0 such that CircEval has circuits of ${n^{1 + \epsilon}}$ size and ${n^{1- \delta^{\prime}}}$ depth. Moreover, the resulting circuits require only ${\tilde{O}(n^{\epsilon})}$ bits of non-uniformity to construct. As a consequence, strong enough depth lower bounds for Circuit Evaluation imply a full separation of P and NC (even with a weak size lower bound).
  2. Lower bounds for quantified Boolean formulas. Let c, d > 1 and e < 1 satisfy c < (1 − e^d)/d. Either the problem of recognizing valid quantified Boolean formulas (QBF) is not solvable in TIME[n^c], or the Circuit Evaluation problem cannot be solved with circuits of n^d size and n^e depth. This implies unconditional polynomial-time uniform circuit lower bounds for solving QBF. We also prove that QBF does not have n^c-time uniform NC circuits, for all c < 2.
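For reference, the Circuit Evaluation problem itself is easy to state in code: evaluate a Boolean circuit given in topological order. The sketch fixes this baseline problem only; the self-reduction and the lower-bound arguments of the abstract are not reproduced, and the gate encoding is an assumption.

```python
def circ_eval(gates, inputs):
    """Evaluate a Boolean circuit presented in topological order.
    Each gate is (name, (op, left, right)), where left/right name earlier wires."""
    wires = dict(inputs)  # wire name -> bit
    ops = {'AND': lambda a, b: a & b,
           'OR':  lambda a, b: a | b,
           'NOT': lambda a, b: 1 - a}
    for name, (op, left, right) in gates:
        a = wires[left]
        b = wires[right] if right is not None else 0
        wires[name] = ops[op](a, b)
    return wires[gates[-1][0]]  # value of the last gate = circuit output

# (x1 AND x2) OR (NOT x3)
gates = [('g1', ('AND', 'x1', 'x2')),
         ('g2', ('NOT', 'x3', None)),
         ('out', ('OR', 'g1', 'g2'))]
print(circ_eval(gates, {'x1': 1, 'x2': 0, 'x3': 0}))  # -> 1
```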

    11.
Drawing planar graphs using the canonical ordering
G. Kant, Algorithmica, 1996, 16(1):4-32
    We introduce a new method to optimize the required area, minimum angle, and number of bends of planar graph drawings on a grid. The main tool is a new type of ordering on the vertices and faces of triconnected planar graphs. Using this method linear-time-and-space algorithms can be designed for many graph-drawing problems. Our main results are as follows:
  • Every triconnected planar graph G admits a planar convex grid drawing with straight lines on a (2n−4)×(n−2) grid, where n is the number of vertices.
  • Every triconnected planar graph with maximum degree 4 admits a planar orthogonal grid drawing on an n×n grid with at most [3n/2]+4 bends, and if n > 6, then every edge has at most two bends.
  • Every planar graph with maximum degree 3 admits a planar orthogonal grid drawing with at most [n/2]+1 bends on an [n/2]×[n/2] grid.
  • Every triconnected planar graph G admits a planar polyline grid drawing on a (2n−6)×(3n−9) grid with minimum angle larger than 2/d radians and at most 5n−15 bends, with d the maximum degree.
These results give considerable improvements over previous results in some cases, and new bounds in other cases. Several other results, e.g., concerning visibility representations, are included.

    12.
    An ongoing line of research has shown super-polynomial lower bounds for uniform and slightly-non-uniform small-depth threshold and arithmetic circuits (Allender, in Chicago J. Theor. Comput. Sci. 1999(7), 1999; Koiran and Perifel, in Proceedings of the 24th Annual IEEE Conference on Computational Complexity (CCC 2009), pp. 35–40, 2009; Jansen and Santhanam, in Proceedings of the 38th International Colloquium on Automata, Languages and Programming (ICALP 2011), I, pp. 724–735, 2011). We give a unified framework that captures and improves each of the previous results. Our main results are that Permanent does not have threshold circuits of the following kinds.
  1. Depth O(1), n^{o(1)} bits of non-uniformity, and size n^{O(1)}.
  2. Depth O(1), polylog(n) bits of non-uniformity, and size s(n) such that for all constants c the c-fold composition of s, s^{(c)}(n), is less than 2^n.
  3. Depth o(log log n), polylog(n) bits of non-uniformity, and size n^{O(1)}.
(1) strengthens a result of Jansen and Santhanam (Jansen and Santhanam, in Proceedings of the 38th International Colloquium on Automata, Languages and Programming (ICALP 2011), I, pp. 724–735, 2011), who obtained similar parameters but for arithmetic circuits of constant depth rather than Boolean threshold circuits. (2) and (3) strengthen results of Allender (Allender, in Chicago J. Theor. Comput. Sci. 1999(7), 1999) and Koiran and Perifel (Koiran and Perifel, in Proceedings of the 24th Annual IEEE Conference on Computational Complexity (CCC 2009), pp. 35–40, 2009), respectively, who obtained results with similar parameters but for completely uniform circuits. Our main technical contribution is to simplify and unify earlier proofs in this area, and adapt the proofs to handle some amount of non-uniformity. We also develop a notion of circuits with a small amount of non-uniformity that naturally interpolates between fully uniform and fully non-uniform circuits. We use this notion, which we term weak uniformity, rather than the earlier and essentially equivalent notion of succinctness used by Jansen and Santhanam because the notion of weak uniformity more fully and easily interpolates between full uniformity and non-uniformity of circuits.
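To pin down the hard function in these statements, here is the Permanent computed by Ryser's inclusion-exclusion formula (exponential time). This is purely definitional and has no bearing on the threshold-circuit classes discussed above.

```python
from itertools import combinations

def permanent(A):
    """Permanent of an n x n integer matrix via Ryser's formula, O(2^n * n^2) time."""
    n = len(A)
    total = 0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1
            for i in range(n):
                prod *= sum(A[i][j] for j in cols)
            total += (-1) ** (n - k) * prod
    return total

print(permanent([[1, 1, 0],
                 [0, 1, 1],
                 [1, 0, 1]]))  # -> 2, the number of perfect matchings of the
                               #    bipartite graph with this biadjacency matrix
```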

    13.
Hardness amplification results show that for every Boolean function f, there exists a Boolean function Amp(f) such that if every size s circuit computes f correctly on at most a 1 − δ fraction of inputs, then every size s′ circuit computes Amp(f) correctly on at most a ${1/2+\epsilon}$ fraction of inputs. All hardness amplification results in the literature suffer from “size loss” meaning that ${s' \leq \epsilon \cdot s}$. We show that proofs using “non-uniform reductions” must suffer from such size loss. A reduction is an oracle circuit ${R^{(\cdot)}}$ which, given oracle access to any function D that computes Amp(f) correctly on a ${1/2+\epsilon}$ fraction of inputs, computes f correctly on a 1 − δ fraction of inputs. A non-uniform reduction is allowed to also receive a short advice string that may depend on both f and D. The well-known connection between hardness amplification and list-decodable error-correcting codes implies that reductions showing hardness amplification cannot be uniform for ${\epsilon < 1/4}$. We show that every non-uniform reduction must make at least ${\Omega(1/\epsilon)}$ queries to its oracle, which implies size loss. Our result is the first lower bound that applies to non-uniform reductions that are adaptive, whereas previous bounds by Shaltiel & Viola (SICOMP 2010) applied only to non-adaptive reductions. We also prove similar bounds for a stronger notion of “function-specific” reductions in which the reduction is only required to work for a specific function f.

    14.
    We present three new approximation algorithms with improved constant ratios for selecting n points in n disks such that the minimum pairwise distance among the points is maximized.
  1. A very simple O(n log n)-time algorithm with ratio 0.511 for disjoint unit disks.
  2. An LP-based algorithm with ratio 0.707 for disjoint disks of arbitrary radii that uses a linear number of variables and constraints, and runs in polynomial time.
  3. A hybrid algorithm with ratio either 0.4487 or 0.4674 for (not necessarily disjoint) unit disks that uses an algorithm of Cabello in combination with either the simple O(n log n)-time algorithm or the LP-based algorithm.
The LP algorithm can be extended for disjoint balls of arbitrary radii in ℝ^d, for any (fixed) dimension d, while preserving the features of the planar algorithm. The algorithm introduces a novel technique which combines linear programming and projections for approximating Euclidean distances. The previous best approximation ratio for dispersion in disjoint disks, even when all disks have the same radius, was 1/2. Our results give a positive answer to an open question raised by Cabello, who asked whether the ratio 1/2 could be improved.

    15.
We prove two main results on how arbitrary linear threshold functions ${f(x) = {\rm sign}(w \cdot x - \theta)}$ over the n-dimensional Boolean hypercube can be approximated by simple threshold functions. Our first result shows that every n-variable threshold function f is ${\epsilon}$-close to a threshold function depending only on ${{\rm Inf}(f)^2 \cdot {\rm poly}(1/\epsilon)}$ many variables, where ${{\rm Inf}(f)}$ denotes the total influence or average sensitivity of f. This is an exponential sharpening of Friedgut’s well-known theorem (Friedgut in Combinatorica 18(1):474–483, 1998), which states that every Boolean function f is ${\epsilon}$-close to a function depending only on ${2^{O({\rm Inf}(f)/\epsilon)}}$ many variables, for the case of threshold functions. We complement this upper bound by showing that ${\Omega({\rm Inf}(f)^2 + 1/\epsilon^2)}$ many variables are required for ${\epsilon}$-approximating threshold functions. Our second result is a proof that every n-variable threshold function is ${\epsilon}$-close to a threshold function with integer weights at most ${{\rm poly}(n) \cdot 2^{\tilde{O}(1/\epsilon^{2/3})}}$. This is an improvement, in the dependence on the error parameter ${\epsilon}$, on an earlier result of Servedio (Comput Complex 16(2):180–209, 2007) which gave a ${{\rm poly}(n) \cdot 2^{\tilde{O}(1/\epsilon^{2})}}$ bound. Our improvement is obtained via a new proof technique that uses strong anti-concentration bounds from probability theory. The new technique also gives a simple and modular proof of the original result of Servedio (Comput Complex 16(2):180–209, 2007) and extends to give low-weight approximators for threshold functions under a range of probability distributions other than the uniform distribution.
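Both results are stated in terms of the total influence Inf(f), so a small exhaustive computation of Inf(f) for a concrete threshold function (majority of 11 bits) is sketched below; it is illustrative only and plays no role in the paper's proofs.

```python
from itertools import product

def total_influence(f, n):
    """Exact total influence (average sensitivity) of f over the uniform
    Boolean cube: Inf(f) = sum_i Pr_x[f(x) != f(x with bit i flipped)],
    i.e. the expected number of sensitive coordinates of a random x.
    Exhaustive, so only for small n."""
    sensitive = 0
    for x in product([0, 1], repeat=n):
        fx = f(x)
        for i in range(n):
            y = x[:i] + (1 - x[i],) + x[i + 1:]
            sensitive += (f(y) != fx)
    return sensitive / 2 ** n

# A linear threshold function sign(w . x - theta): majority of 11 bits.
n = 11
maj = lambda x: int(sum(x) >= (n + 1) // 2)
print(total_influence(maj, n))  # -> 2.707... = 11 * C(10,5) / 2**10, Theta(sqrt(n))
```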

    16.
    In this paper, we study two interprocedural program-analysis problems—interprocedural slicing and interprocedural dataflow analysis— and present the following results:
  • Interprocedural slicing is log-space complete for P.
  • The problem of obtaining “meet-over-all-valid-paths” solutions to interprocedural versions of distributive dataflow-analysis problems is P-hard.
  • Obtaining “meet-over-all-valid-paths” solutions to interprocedural versions of distributive dataflow-analysis problems that involve finite sets of dataflow facts (such as the classical “gen/kill” problems) is log-space complete for P (a classical intraprocedural example follows below).
These results provide evidence that there do not exist fast (NC-class) parallel algorithms for interprocedural slicing and precise interprocedural dataflow analysis (unless P = NC). That is, it is unlikely that there are algorithms for interprocedural slicing and precise interprocedural dataflow analysis for which the number of processors is bounded by a polynomial in the size of the input, and whose running time is bounded by a polynomial in the logarithm of the size of the input. This suggests that there are limitations on the ability to use parallelism to overcome compiler bottlenecks due to expensive interprocedural-analysis computations.
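For readers unfamiliar with the "gen/kill" framework mentioned in the third item, the following is the classical intraprocedural iterative solver for reaching definitions. The paper's results concern the much harder interprocedural, valid-path version, which this sketch does not address; the CFG encoding and names are assumptions.

```python
def reaching_definitions(succ, gen, kill):
    """Classical intraprocedural gen/kill dataflow: iterate
    OUT[b] = gen[b] | (IN[b] - kill[b]), with IN[b] the union of OUT over
    predecessors, until a fixed point is reached."""
    nodes = list(succ)
    preds = {b: [a for a in nodes if b in succ[a]] for b in nodes}
    out = {b: set() for b in nodes}
    changed = True
    while changed:
        changed = False
        for b in nodes:
            in_b = set().union(*(out[p] for p in preds[b])) if preds[b] else set()
            new_out = gen[b] | (in_b - kill[b])
            if new_out != out[b]:
                out[b], changed = new_out, True
    return out

# Tiny CFG: 1 -> 2 -> 3, with a loop 2 -> 2; d1 defined in block 1, d2 in
# block 2, and d2 kills d1 (same variable).
succ = {1: [2], 2: [2, 3], 3: []}
gen  = {1: {'d1'}, 2: {'d2'}, 3: set()}
kill = {1: set(), 2: {'d1'}, 3: set()}
print(reaching_definitions(succ, gen, kill))
# -> {1: {'d1'}, 2: {'d2'}, 3: {'d2'}}
```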

    17.
Every Boolean function on n variables can be expressed as a unique multivariate polynomial modulo p for every prime p. In this work, we study how the degree of a function in one characteristic affects its complexity in other characteristics. We establish the following general principle: functions with low degree modulo p must have high complexity in every other characteristic q. More precisely, we show the following results about Boolean functions f : {0, 1}^n → {0, 1} which depend on all n variables, and distinct primes p ≠ q:
  • If f has degree o(log n) modulo p, then it must have degree Ω(n^{1−o(1)}) modulo q. Thus a Boolean function has degree o(log n) in at most one characteristic. This result is essentially tight, as there exist functions that have degree log n in every characteristic.
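The degree of a Boolean function modulo a prime can be computed directly by Möbius inversion of its truth table. The sketch below does this for parity, which has degree 1 modulo 2 but full degree modulo every odd prime, a small instance of the "at most one characteristic" principle above; the names are assumed for the example.

```python
from itertools import combinations

def degree_mod_p(f, n, p):
    """Degree of the unique multilinear polynomial representing the Boolean
    function f over F_p. The coefficient of prod_{i in S} x_i is
    sum_{T subseteq S} (-1)^{|S|-|T|} f(1_T), reduced mod p."""
    def point(T):
        return tuple(1 if i in T else 0 for i in range(n))
    deg = 0
    for k in range(n + 1):
        for S in combinations(range(n), k):
            coeff = 0
            for t in range(k + 1):
                for T in combinations(S, t):
                    coeff += (-1) ** (k - t) * f(point(T))
            if coeff % p != 0:
                deg = k
    return deg

parity = lambda x: sum(x) % 2
print(degree_mod_p(parity, 4, 2))  # -> 1: parity is linear over F_2
print(degree_mod_p(parity, 4, 3))  # -> 4: full degree over F_3
```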

    18.
    19.
In this paper we construct an interpolatory quadrature formula of the type $$\mathop {\rlap{--} \smallint }\limits_{ - 1}^1 \frac{{f'(x)}}{{y - x}}dx \approx \sum\limits_{i = 1}^n {w_{ni} (y)f(x_{ni} )},$$ where f(x) = (1 − x)^α (1 + x)^β f_0(x), α, β > 0, and {x_{ni}} are the n zeros of the n-th degree Chebyshev polynomial of the first kind, T_n(x). We also give a convergence result and examine the behavior of the quantity \( \sum\limits_{i = 1}^n {|w_{ni} (y)|} \) as n → ∞.

    20.
Stable semantics for disjunctive programs
We introduce the stable model semantics for disjunctive logic programs and deductive databases, which generalizes the stable model semantics, defined earlier for normal (i.e., non-disjunctive) programs. Depending on whether only total (2-valued) or all partial (3-valued) models are used, we obtain the disjunctive stable semantics or the partial disjunctive stable semantics, respectively. The proposed semantics are shown to have the following properties:
  • For normal programs, the disjunctive (respectively, partial disjunctive) stable semantics coincides with the stable (respectively, partial stable) semantics.
  • For normal programs, the partial disjunctive stable semantics also coincides with the well-founded semantics.
  • For locally stratified disjunctive programs both (total and partial) disjunctive stable semantics coincide with the perfect model semantics.
  • The partial disjunctive stable semantics can be generalized to the class of all disjunctive logic programs.
  • Both (total and partial) disjunctive stable semantics can be naturally extended to a broader class of disjunctive programs that permit the use of classical negation.
  • After translation of the program P into a suitable autoepistemic theory \( \hat P \), the disjunctive (respectively, partial disjunctive) stable semantics of P coincides with the autoepistemic (respectively, 3-valued autoepistemic) semantics of \( \hat P \).
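A tiny worked example helps fix the total (2-valued) disjunctive stable semantics: the brute-force checker below enumerates candidate atom sets, builds the Gelfond-Lifschitz reduct, and keeps exactly the minimal models of the reduct. It is an exponential-time illustration under an assumed rule encoding, not the construction used in the paper.

```python
from itertools import chain, combinations

def stable_models(atoms, rules):
    """Brute-force total (2-valued) disjunctive stable models.
    A rule is (head, pos, neg), read as: a1 v ... v ak :- b1,...,bm, not c1,...,not cn.
    M is stable iff M is a minimal model of the Gelfond-Lifschitz reduct of
    the program with respect to M."""
    atoms = list(atoms)
    subsets = [frozenset(s) for s in chain.from_iterable(
        combinations(atoms, k) for k in range(len(atoms) + 1))]

    def reduct(m):
        # Drop rules whose negative body intersects M; delete "not" literals.
        return [(head, pos) for head, pos, neg in rules if not (neg & m)]

    def is_model(m, red):
        return all(head & m for head, pos in red if pos <= m)

    models = []
    for m in subsets:
        r = reduct(m)
        if is_model(m, r) and not any(is_model(m2, r) for m2 in subsets if m2 < m):
            models.append(set(m))
    return models

# Program:  a v b.       c :- a, not b.
rules = [({'a', 'b'}, set(), set()),
         ({'c'}, {'a'}, {'b'})]
print(stable_models({'a', 'b', 'c'}, rules))  # -> [{'b'}, {'a', 'c'}]
```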
