Similar Articles
 20 similar articles found (search time: 31 ms)
1.
An ongoing line of research has shown super-polynomial lower bounds for uniform and slightly-non-uniform small-depth threshold and arithmetic circuits (Allender, in Chicago J. Theor. Comput. Sci. 1999(7), 1999; Koiran and Perifel, in Proceedings of the 24th Annual IEEE Conference on Computational Complexity (CCC 2009), pp. 35–40, 2009; Jansen and Santhanam, in Proceedings of the 38th International Colloquium on Automata, Languages and Programming (ICALP 2011), I, pp. 724–735, 2011). We give a unified framework that captures and improves each of the previous results. Our main results are that Permanent does not have threshold circuits of the following kinds.
  1. Depth O(1), n^{o(1)} bits of non-uniformity, and size n^{O(1)}.
  2. Depth O(1), polylog(n) bits of non-uniformity, and size s(n) such that for all constants c the c-fold composition of s, s^{(c)}(n), is less than 2^n (this notation is spelled out after the list).
  3. Depth o(log log n), polylog(n) bits of non-uniformity, and size n^{O(1)}.
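Here $s^{(c)}$ denotes the c-fold composition of the size bound s with itself (our reading of the standard notation, which the abstract does not spell out):
$$ s^{(1)}(n) = s(n), \qquad s^{(c)}(n) = s\bigl(s^{(c-1)}(n)\bigr) \quad \text{for } c \ge 2, $$
so condition 2 asks that every constant number of self-compositions of s stays below 2^n.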
(1) strengthens a result of Jansen and Santhanam (Jansen and Santhanam, in Proceedings of the 38th International Colloquium on Automata, Languages and Programming (ICALP 2011), I, pp. 724–735, 2011), who obtained similar parameters but for arithmetic circuits of constant depth rather than Boolean threshold circuits. (2) and (3) strengthen results of Allender (Allender, in Chicago J. Theor. Comput. Sci. 1999(7), 1999) and Koiran and Perifel (Koiran and Perifel, in Proceedings of the 24th Annual IEEE Conference on Computational Complexity (CCC 2009), pp. 35–40, 2009), respectively, who obtained results with similar parameters but for completely uniform circuits. Our main technical contribution is to simplify and unify earlier proofs in this area, and adapt the proofs to handle some amount of non-uniformity. We also develop a notion of circuits with a small amount of non-uniformity that naturally interpolates between fully uniform and fully non-uniform circuits. We use this notion, which we term weak uniformity, rather than the earlier and essentially equivalent notion of succinctness used by Jansen and Santhanam because the notion of weak uniformity more fully and easily interpolates between full uniformity and non-uniformity of circuits.

2.
In this paper we study parallel algorithms for the Mesh-of-Processors architecture to solve visibility and related separability problems for sets of simple polygons in the plane. In particular, we present the following algorithms:
  • An O( \(\sqrt N\) ) time algorithm for computing, on a Mesh-of-Processors of size N, the visibility polygon from a point located in an N-vertex polygon, possibly with holes.
  • O( \(\sqrt N\) ) time algorithms for computing, on a Mesh-of-Processors of size N, the set of all points on the boundary of an N-vertex polygon P which are visible in a given direction d, as well as the visibility hull of P for a given direction d.
  • An O( \(\sqrt N\) ) time algorithm for detecting, on a Mesh-of-Processors of size 2N, whether two N-vertex polygons are separable in a given direction, and an O( \(\sqrt {MN}\) ) time algorithm for detecting, on a Mesh-of-Processors of size MN, whether M N-vertex polygons are sequentially separable in a given direction.
  All proposed algorithms are asymptotically optimal (for the Mesh-of-Processors) with respect to time and number of processors.

    3.
    We consider the problem of indexing a string t of length n to report the occurrences of a query pattern p containing m characters and j wildcards. Let occ be the number of occurrences of p in t, and σ the size of the alphabet. We obtain the following results.
    • A linear space index with query time O(m + σ^j log log n + occ). This significantly improves the previously best known linear space index by Lam et al. (in Proc. 18th ISAAC, pp. 846–857, [2007]), which requires query time Θ(jn) in the worst case.
    • An index with query time O(m+j+occ) using space \(O(\sigma^{k^{2}} n \log^{k} \log n)\) , where k is the maximum number of wildcards allowed in the pattern. This is the first non-trivial bound with this query time.
    • A time-space trade-off, generalizing the index by Cole et al. (in Proc. 36th STOC, pp. 91–100, [2004]).
    We also show that these indexes can be generalized to allow variable length gaps in the pattern. Our results are obtained using a novel combination of well-known and new techniques, which could be of independent interest.
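For reference, the reporting problem that these indexes accelerate can be stated operationally as a naive scan. The sketch below is our own illustration, not any of the indexes above; the '*' wildcard convention and the function name are chosen for exposition.

```python
def wildcard_occurrences(t: str, p: str, wildcard: str = "*") -> list[int]:
    """Report every starting position where pattern p, whose wildcard
    characters match any single character, occurs in text t.  This naive
    O(nm) scan is only a baseline, not the O(m + sigma^j loglog n + occ)
    index described in the abstract."""
    n, m = len(t), len(p)
    occ = []
    for i in range(n - m + 1):
        if all(pc == wildcard or pc == t[i + j] for j, pc in enumerate(p)):
            occ.append(i)
    return occ

# Example with j = 1 wildcard; the reported list has length occ.
print(wildcard_occurrences("abcabdabc", "ab*"))   # [0, 3, 6]
```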

    4.
    A circle graph is the intersection graph of a set of chords in a circle. Keil [Discrete Appl. Math., 42(1):51–63, 1993] proved that Dominating Set, Connected Dominating Set, and Total Dominating Set are NP-complete in circle graphs. To the best of our knowledge, nothing was known about the parameterized complexity of these problems in circle graphs. In this paper we prove the following results, which contribute in this direction:
    • Dominating Set, Independent Dominating Set, Connected Dominating Set, Total Dominating Set, and Acyclic Dominating Set are W[1]-hard in circle graphs, parameterized by the size of the solution.
    • Whereas both Connected Dominating Set and Acyclic Dominating Set are W[1]-hard in circle graphs, it turns out that Connected Acyclic Dominating Set is polynomial-time solvable in circle graphs.
    • If T is a given tree, deciding whether a circle graph G has a dominating set inducing a graph isomorphic to T is NP-complete when T is in the input, and FPT when parameterized by t=|V(T)|. We prove that the FPT algorithm runs in subexponential time, namely $2^{\mathcal{O}(t \cdot\frac{\log\log t}{\log t})} \cdot n^{\mathcal{O}(1)}$ , where n=|V(G)|.
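To make the opening definition concrete, here is a small sketch (our own illustration, with an assumed encoding of chords as pairs of endpoint positions on the circle) of how a circle graph arises as the intersection graph of chords:

```python
from itertools import combinations

def circle_graph(chords):
    """Intersection graph of chords of a circle.  Each chord is a pair of
    distinct endpoint positions, read clockwise around the circle; two chords
    cross iff exactly one endpoint of one lies strictly between the endpoints
    of the other."""
    def crosses(c1, c2):
        a, b = sorted(c1)
        return (a < c2[0] < b) != (a < c2[1] < b)
    return {(i, j) for (i, c1), (j, c2) in combinations(enumerate(chords), 2)
            if crosses(c1, c2)}

# Chords 0-2 and 1-3 interleave, so vertices 0 and 1 are adjacent;
# chord 4-5 crosses neither, so vertex 2 is isolated.
print(circle_graph([(0, 2), (1, 3), (4, 5)]))   # {(0, 1)}
```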

    5.
    We provide optimal parallel solutions to several link-distance problems set in trapezoided rectilinear polygons. All our main parallel algorithms are deterministic and designed to run on the exclusive read exclusive write parallel random access machine (EREW PRAM). Let P be a trapezoided rectilinear simple polygon with n vertices. In O(log n) time using O(n/log n) processors we can optimally compute:
    1. Minimum rectilinear link paths, or shortest paths in the L_1 metric, from any point in P to all vertices of P.
    2. Minimum rectilinear link paths from any segment inside P to all vertices of P.
    3. The rectilinear window (histogram) partition of P.
    4. Both covering radii and vertex intervals for any diagonal of P.
    5. A data structure to support rectilinear link-distance queries between any two points in P (queries can be answered optimally in O(log n) time by a single processor).
    Our solution to 5 is based on a new linear-time sequential algorithm for this problem, which is also provided here. This improves on the previously best-known sequential algorithm for this problem, which used O(n log n) time and space. We develop techniques for solving link-distance problems in parallel which are expected to find applications in the design of other parallel computational geometry algorithms. We employ these parallel techniques, for example, to compute (on a CREW PRAM) optimally the link diameter, the link center, and the central diagonal of a rectilinear polygon.

    6.
    7.
    We present three new approximation algorithms with improved constant ratios for selecting n points in n disks such that the minimum pairwise distance among the points is maximized.
    1. A very simple O(n log n)-time algorithm with ratio 0.511 for disjoint unit disks.
    2. An LP-based algorithm with ratio 0.707 for disjoint disks of arbitrary radii that uses a linear number of variables and constraints, and runs in polynomial time.
    3. A hybrid algorithm with ratio either 0.4487 or 0.4674 for (not necessarily disjoint) unit disks that uses an algorithm of Cabello in combination with either the simple O(n log n)-time algorithm or the LP-based algorithm.
    The LP algorithm can be extended for disjoint balls of arbitrary radii in ℝ^d, for any (fixed) dimension d, while preserving the features of the planar algorithm. The algorithm introduces a novel technique which combines linear programming and projections for approximating Euclidean distances. The previous best approximation ratio for dispersion in disjoint disks, even when all disks have the same radius, was 1/2. Our results give a positive answer to an open question raised by Cabello, who asked whether the ratio 1/2 could be improved.

    8.
    A sequence of natural numbers is said to have level k, for some natural number k, if it can be computed by a deterministic pushdown automaton of level k (Fratani and Sénizergues in Ann. Pure Appl. Log. 141:363–411, 2006). We show here that the sequences of level 2 are exactly the rational formal power series over one indeterminate. More generally, we study mappings from words to words and show that the following classes coincide:
  • the mappings which are computable by deterministic pushdown automata of level 2,
  • the mappings which are solutions of a system of catenative recurrence equations (a toy example of such a recurrence appears after this list),
  • the mappings which are definable as a Lindenmayer system of type HDT0L.
  We illustrate the usefulness of this characterization by proving three statements about formal power series, rational sets of homomorphisms and equations in words.
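As a toy illustration of the second and third items (our own example, not taken from the paper): the Fibonacci words satisfy the catenative recurrence w(k+2) = w(k+1) w(k) and are equally generated by iterating a Lindenmayer-style morphism.

```python
# Toy example: Fibonacci words via a catenative recurrence and via iterating
# the morphism a -> ab, b -> a (a D0L-style system); both produce the same
# sequence of words.

def fib_words_catenative(n):
    w = ["a", "ab"]                       # w(0), w(1)
    while len(w) < n:
        w.append(w[-1] + w[-2])           # catenative recurrence w(k+2) = w(k+1) w(k)
    return w[:n]

def fib_words_morphism(n):
    morph = {"a": "ab", "b": "a"}
    word, out = "a", []
    for _ in range(n):
        out.append(word)
        word = "".join(morph[c] for c in word)   # apply the morphism letterwise
    return out

print(fib_words_catenative(5))            # ['a', 'ab', 'aba', 'abaab', 'abaababa']
print(fib_words_morphism(5) == fib_words_catenative(5))   # True
```

The word lengths 1, 2, 3, 5, 8, ... satisfy a linear recurrence, so the associated counting sequence is rational, in line with the level-2 characterization above.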

    9.
    This paper studies maximality and totality of stable functions in the category of stable bifinite domains. We present three main results as follows:
    1. Every maximum-preserving function is a maximal element in the stable function spaces;
    2. a maximal stable function f : D → E is maximum-preserving if D is maximum-separable and E is completely separable; and
    3. a stable bifinite domain D is maximum-separable if and only if, for any locally distributive stable bifinite domain E, each maximal stable function f : D → E is maximum-preserving.

    10.
    Trial and error     
    A PAC-learning algorithm is d-space bounded if it stores at most d examples from the sample at any time. We characterize the d-space learnable concept classes. For this purpose we introduce the compression parameter of a concept class b and design our Trial and Error Learning Algorithm. We show: b is d-space learnable if and only if the compression parameter of b is at most d. This learning algorithm does not produce a hypothesis consistent with the whole sample, unlike previous approaches, e.g. by Floyd, who presents consistent space-bounded learning algorithms but has to restrict herself to very special concept classes. On the other hand our algorithm needs large samples; the compression parameter appears as an exponent in the sample size. We present several examples of polynomial-time space-bounded learnable concept classes (a toy space-bounded learner for one such class is sketched after the list):
  • all intersection-closed concept classes with finite VC-dimension,
  • convex n-gons in ℝ^2,
  • halfspaces in ℝ^n,
  • unions of triangles in ℝ^2.
  We further relate the compression parameter to the VC-dimension, and discuss variants of this parameter.
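As a toy illustration of the space-bounded setting for an intersection-closed class (our own example; it is not the Trial and Error Learning Algorithm itself): for axis-parallel rectangles, a learner only ever needs to remember the four extreme coordinates of the positive examples seen so far, not the whole sample.

```python
def learn_rectangle(stream):
    """Hypothesis for the intersection-closed class of axis-parallel
    rectangles: the tightest rectangle around the positive examples.
    Only O(1) numbers are stored, regardless of the sample size."""
    lo_x = lo_y = float("inf")
    hi_x = hi_y = float("-inf")
    for (x, y), positive in stream:
        if positive:                            # shrink-wrap around positives
            lo_x, hi_x = min(lo_x, x), max(hi_x, x)
            lo_y, hi_y = min(lo_y, y), max(hi_y, y)
    return (lo_x, lo_y, hi_x, hi_y)

sample = [((1, 1), True), ((3, 2), True), ((5, 5), False), ((2, 4), True)]
print(learn_rectangle(sample))                  # (1, 1, 3, 4)
```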

    11.
    Stable semantics for disjunctive programs
    We introduce the stable model semantics for disjunctive logic programs and deductive databases, which generalizes the stable model semantics defined earlier for normal (i.e., non-disjunctive) programs. Depending on whether only total (2-valued) or all partial (3-valued) models are used we obtain the disjunctive stable semantics or the partial disjunctive stable semantics, respectively. The proposed semantics are shown to have the following properties (a toy disjunctive program and its total stable models are worked out after the list):
  • For normal programs, the disjunctive (respectively, partial disjunctive) stable semantics coincides with the stable (respectively, partial stable) semantics.
  • For normal programs, the partial disjunctive stable semantics also coincides with the well-founded semantics.
  • For locally stratified disjunctive programs both (total and partial) disjunctive stable semantics coincide with the perfect model semantics.
  • The partial disjunctive stable semantics can be generalized to the class of all disjunctive logic programs.
  • Both (total and partial) disjunctive stable semantics can be naturally extended to a broader class of disjunctive programs that permit the use of classical negation.
  • After translation of the program P into a suitable autoepistemic theory \( \hat P \) the disjunctive (respectively, partial disjunctive) stable semantics of P coincides with the autoepistemic (respectively, 3-valued autoepistemic) semantics of \( \hat P \).
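As a toy illustration of the total (2-valued) disjunctive stable semantics (our own sketch, using the standard reduct-plus-minimal-model definition, not necessarily the paper's exact formulation):

```python
from itertools import chain, combinations

def models(rules, atoms):
    """All sets of atoms that classically satisfy every rule.  A rule is
    (head, positive_body, negative_body); it is satisfied unless the body
    holds and no head atom is true."""
    subsets = chain.from_iterable(combinations(sorted(atoms), r)
                                  for r in range(len(atoms) + 1))
    for m in map(set, subsets):
        if all((not (pos <= m and not (neg & m))) or (head & m)
               for head, pos, neg in rules):
            yield m

def stable_models(rules, atoms):
    result = []
    for m in models(rules, atoms):
        # Gelfond-Lifschitz reduct: drop rules blocked by m, strip negation.
        reduct = [(head, pos, set()) for head, pos, neg in rules if not (neg & m)]
        is_model = any(m == m2 for m2 in models(reduct, atoms))
        is_minimal = not any(m2 < m for m2 in models(reduct, atoms))
        if is_model and is_minimal:            # stable = minimal model of the reduct
            result.append(m)
    return result

# Program:   a or b.        c :- a, not b.
rules = [({"a", "b"}, set(), set()), ({"c"}, {"a"}, {"b"})]
print(stable_models(rules, {"a", "b", "c"}))   # two stable models: {'b'} and {'a', 'c'}
```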

    12.
    We use algorithmic tools for graphs of small treewidth to address questions in complexity theory. For our main construction, we prove that multiplicatively disjoint arithmetic circuits of size n^{O(1)} and treewidth k can be simulated by bounded fan-in arithmetic formulas of depth O(k^2 log n). From this we derive an analogous statement for syntactically multilinear arithmetic circuits, which strengthens the central theorem of M. Mahajan and B.V.R. Rao (Proc. 33rd International Symposium on Mathematical Foundations of Computer Science, vol. 5162, pp. 455–466, 2008). We show our main construction has the following three applications:
    • Bounded width arithmetic circuits of size n^{O(1)} can be balanced to depth O(log n), provided chains of iterated multiplication in the circuit are of length O(1).
    • Boolean bounded fan-in circuits of size n^{O(1)} and treewidth k can be simulated by bounded fan-in formulas of depth O(k^2 log n). This strengthens, in the non-uniform setting, the known inclusion SC^0 ⊆ NC^1.
    • We demonstrate treewidth-restricted cases of Directed-Reachability and the Circuit Value Problem that can be solved in LogDCFL.
    We also give a construction showing, for both arithmetic and Boolean circuits, that any circuit of size n^{O(1)} and treewidth O(log^i n) can be simulated by a circuit of width O(log^{i+1} n) and size n^c, where c = O(1) if i = 0, and c = O(log log n) otherwise.

    13.
    We define a class of n-ary relations on strings called the regular prefix relations, and give four alternative characterizations of this class:
    1. the relations recognized by a new type of automaton, the prefix automata,
    2. the relations recognized by tree automata specialized to relations on strings,
    3. the relations between strings definable in the second order theory of k successors,
    4. the smallest class containing the regular sets and the prefix relation, and closed under the Boolean operations, Cartesian product, projection, explicit transformation, and concatenation with Cartesian products of regular sets.
    We give concrete examples of regular prefix relations, and a pumping argument for prefix automata. An application of these results to the study of inductive inference of regular sets is described.

    14.
    A. Bertoni, G. Mauri, M. Torelli, Calcolo 17(2):163–174, 1980
    This paper is intended to show that an algebraic approach can give useful suggestions to design efficient algorithms solving combinatorial problems. The problems we discuss in the paper are:
    1. Counting strings of given length generated by a regular grammar. For this problem, we give an exact algorithm whose complexity is O(log n) (with respect to the number of executed operations), and an approximate algorithm which however still has the same order of complexity (the standard transfer-matrix idea behind such a logarithmic operation count is sketched after this list);
    2. counting trees recognized by a tree automaton. For this problem, we give an exact algorithm of complexity O(n) and an approximate one of complexity O(log n). For this approximate algorithm the relative error is shown to be O(1/n).
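For intuition about how a length-n count can be obtained with only O(log n) arithmetic operations, here is the standard transfer-matrix sketch (our own illustration, not the paper's algebraic construction): the number of accepted strings of length n comes from the n-th power of the automaton's transition-count matrix, computed by repeated squaring.

```python
def mat_mult(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def mat_pow(A, n):
    size = len(A)
    result = [[int(i == j) for j in range(size)] for i in range(size)]
    while n:                       # repeated squaring: O(log n) multiplications
        if n & 1:
            result = mat_mult(result, A)
        A = mat_mult(A, A)
        n >>= 1
    return result

# Transfer matrix of a 2-state DFA over {a, b} accepting strings with an even
# number of a's: M[i][j] counts the letters leading from state i to state j.
M = [[1, 1],
     [1, 1]]
n = 10
print(mat_pow(M, n)[0][0])   # 512 strings of length 10 with an even number of a's
```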

    15.
    We strengthen a previously known connection between the size complexity of two-way finite automata (2FAs) and the space complexity of Turing machines (TMs). Specifically, we prove that
  • every s-state 2NFA has a poly(s)-state 2DFA that agrees with it on all inputs of length ≤ s if and only if NL ⊆ L/poly, and
  • every s-state 2NFA has a poly(s)-state 2DFA that agrees with it on all inputs of length ≤ 2^s if and only if NLL ⊆ LL/polylog.
  Here, 2DFAs and 2NFAs are the deterministic and nondeterministic 2FAs, NL and L/poly are the standard classes of languages recognizable in logarithmic space by nondeterministic TMs and by deterministic TMs with access to polynomially long advice, and NLL and LL/polylog are the corresponding complexity classes for space O(log log n) and advice length poly(log n). Our arguments strengthen and extend an old theorem by Berman and Lingas and can be used to obtain variants of the above statements for other modes of computation or other combinations of bounds for the input length, the space usage, and the length of advice.

    16.
    To model association fields that underlie perceptional organization (gestalt) in psychophysics we consider the problem P_curve of minimizing $\int _{0}^{\ell} \sqrt{\xi^{2} +\kappa^{2}(s)} {\rm d}s $ for a planar curve having fixed initial and final positions and directions. Here κ(s) is the curvature of the curve with free total length ℓ. This problem comes from a model of geometry of vision due to Petitot (in J. Physiol. Paris 97:265–309, 2003; Math. Inf. Sci. Humaines 145:5–101, 1999), and Citti & Sarti (in J. Math. Imaging Vis. 24(3):307–326, 2006). In previous work we proved that the range $\mathcal{R} \subset\mathrm{SE}(2)$ of the exponential map of the underlying geometric problem formulated on SE(2) consists of precisely those end-conditions (x_fin, y_fin, θ_fin) that can be connected by a globally minimizing geodesic starting at the origin (x_in, y_in, θ_in)=(0,0,0). From the applied imaging point of view it is relevant to analyze the sub-Riemannian geodesics and $\mathcal{R}$ in detail. In this article we
    • show that $\mathcal{R}$ is contained in the half space x ≥ 0 and that any endpoint (0, y_fin) ≠ (0,0) is reached with angle π,
    • show that the boundary $\partial\mathcal{R}$ consists of endpoints of minimizers either starting or ending in a cusp,
    • analyze and plot the cones of reachable angles θ_fin per spatial endpoint (x_fin, y_fin),
    • relate the endings of association fields to $\partial\mathcal {R}$ and compute the length towards a cusp,
    • analyze the exponential map both with the common arc-length parametrization t in the sub-Riemannian manifold $(\mathrm{SE}(2),\mathrm{Ker}(-\sin\theta{\rm d}x +\cos\theta {\rm d}y), \mathcal{G}_{\xi}:=\xi^{2}(\cos\theta{\rm d}x+ \sin\theta {\rm d}y) \otimes(\cos\theta{\rm d}x+ \sin\theta{\rm d}y) + {\rm d}\theta \otimes{\rm d}\theta)$ and with spatial arc-length parametrization s in the plane $\mathbb{R}^{2}$ . Surprisingly, s-parametrization simplifies the exponential map, the curvature formulas, the cusp-surface, and the boundary value problem,
    • present a novel efficient algorithm solving the boundary value problem,
    • show that sub-Riemannian geodesics solve Petitot’s circle bundle model (cf. Petitot in J. Physiol. Paris 97:265–309, [2003]),
    • show a clear similarity with association field lines and sub-Riemannian geodesics.

    17.
    The continuation method, well established for the solution of nonlinear equations, is extended to restricted optimization problems. Only the locally active restrictions are considered along the homotopy path. It is assumed that there are only finitely many critical points, i.e. that there are only finitely many changes of the index set of active restrictions. The globally convergent algorithm which we present proceeds in three stages (a minimal numerical sketch of the first stage follows the list):
    1. Within each stability region, the solution is computed by the classical continuation method.
    2. On the boundary of a stability region, a critical point \(\bar t\) is determined.
    3. A new active index set is determined when \(\bar t\) is passed.
    For the class of convex problems, the hypotheses for the convergence of the algorithm may be secured. The algorithm is applied to several examples.
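A minimal numerical sketch of the first stage (our own illustration for an unconstrained scalar homotopy H(x, t) = 0; the particular H, step size, and iteration counts are assumptions made for exposition):

```python
def continuation(H, H_x, H_t, x0, steps=20, newton_iters=5):
    """Follow the solution of H(x, t) = 0 from t = 0 to t = 1 with an Euler
    predictor along dx/dt = -H_t/H_x and a Newton corrector at each fixed t."""
    x, t = x0, 0.0
    dt = 1.0 / steps
    for _ in range(steps):
        x += -H_t(x, t) / H_x(x, t) * dt       # predictor step
        t += dt
        for _ in range(newton_iters):          # corrector: Newton's method in x
            x -= H(x, t) / H_x(x, t)
    return x

# Homotopy from the easy equation x - 1 = 0 to the target x**3 + x - 3 = 0.
H   = lambda x, t: (1 - t) * (x - 1) + t * (x**3 + x - 3)
H_x = lambda x, t: (1 - t) + t * (3 * x**2 + 1)
H_t = lambda x, t: -(x - 1) + (x**3 + x - 3)
print(continuation(H, H_x, H_t, x0=1.0))       # approx 1.213, a root of x**3 + x - 3
```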

    18.
    We present a uniform approach to problems involving lines in 3-space. This approach is based on mapping lines in ℝ^3 into points and hyperplanes in five-dimensional projective space (Plücker space); a small sketch of this mapping appears after the list. We obtain new results on the following problems:
    1. Preprocess n triangles so as to answer efficiently the query: "Given a ray, which is the first triangle hit?" (Ray-shooting problem). We discuss the ray-shooting problem for both disjoint and nondisjoint triangles.
    2. Construct the intersection of two nonconvex polyhedra in an output-sensitive way with a subquadratic overhead term.
    3. Construct the arrangement of n intersecting triangles in 3-space in an output-sensitive way, with a subquadratic overhead term.
    4. Efficiently detect the first face hit by any ray in a set of axis-oriented polyhedra.
    5. Preprocess n lines (segments) so as to answer efficiently the query "Given two lines, is it possible to move one into the other without crossing any of the initial lines (segments)?" (Isotopy problem). If the movement is possible, produce an explicit representation of it.
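A small sketch of the Plücker map referred to above (our own illustration with assumed point coordinates): a line through two points is sent to a 6-tuple of direction and moment coordinates, and the sign of the permuted inner product of two such tuples tells whether the lines are coplanar or skew, which is the primitive behind the point/hyperplane view in Plücker space.

```python
def pluecker(p, q):
    """Pluecker coordinates (direction, moment) of the line through p and q."""
    d = [q[i] - p[i] for i in range(3)]            # direction q - p
    m = [p[1]*q[2] - p[2]*q[1],                    # moment p x q
         p[2]*q[0] - p[0]*q[2],
         p[0]*q[1] - p[1]*q[0]]
    return d, m

def side(l1, l2):
    """Permuted inner product; zero iff the two lines are coplanar."""
    (d1, m1), (d2, m2) = l1, l2
    return sum(d1[i] * m2[i] + d2[i] * m1[i] for i in range(3))

x_axis = pluecker((0, 0, 0), (1, 0, 0))
y_axis = pluecker((0, 0, 0), (0, 1, 0))    # meets the x-axis at the origin
skew   = pluecker((0, 0, 1), (0, 1, 1))    # parallel to the y-axis, lifted by 1
print(side(x_axis, y_axis))                # 0  -> coplanar (the lines intersect)
print(side(x_axis, skew))                  # -1 -> nonzero, the lines are skew
```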

    19.
    We give a self-reduction for the Circuit Evaluation problem (CircEval) and prove the following consequences.
    1. Amplifying size–depth lower bounds. If CircEval has Boolean circuits of $n^k$ size and $n^{1-\delta}$ depth for some k and δ, then for every $\epsilon > 0$, there is a δ′ > 0 such that CircEval has circuits of $n^{1+\epsilon}$ size and $n^{1-\delta'}$ depth. Moreover, the resulting circuits require only $\tilde{O}(n^{\epsilon})$ bits of non-uniformity to construct. As a consequence, strong enough depth lower bounds for Circuit Evaluation imply a full separation of P and NC (even with a weak size lower bound).
    2. Lower bounds for quantified Boolean formulas. Let c, d > 1 and e < 1 satisfy $c < (1 - e^{d})/d$. Either the problem of recognizing valid quantified Boolean formulas (QBF) is not solvable in TIME[$n^{c}$], or the Circuit Evaluation problem cannot be solved with circuits of $n^{d}$ size and $n^{e}$ depth. This implies unconditional polynomial-time uniform circuit lower bounds for solving QBF. We also prove that QBF does not have $n^{c}$-time uniform NC circuits, for all c < 2.

    20.
    We consider nonlinear boundary value problems with arbitrarily many solutions u ∈ C^2[a, b]. In this paper an algorithm will be established for a priori bounds \(\bar u, \bar d \in C[a,b]\) with the following properties:
    1. For every solution u of the nonlinear problem we obtain $$-\bar u(x) \leqslant u(x) \leqslant \bar u(x), \qquad -\bar d(x) \leqslant u'(x) \leqslant \bar d(x)$$ for all x ∈ [a, b].
    2. The bounds \(\bar u\) and \(\bar d\) are defined by the use of the functions exp, sin and cos.
    3. We use neither the knowledge of solutions nor the number of solutions.

