Similar Documents
20 similar documents found.
1.
We study the problem of computing canonical forms for graphs and hypergraphs under Abelian group action and show tight complexity bounds. Our approach is algebraic. We transform the problem of computing canonical forms for graphs to the problem of computing canonical forms for associated algebraic structures, and we develop parallel algorithms for these associated problems.
  1. In our first result we show that the problem of computing canonical labelings for hypergraphs of color class size 2 is logspace Turing equivalent to solving a system of linear equations over the field $\mathbb{F}_2$ (a small solver sketch follows this entry). This implies a deterministic NC² algorithm for the problem.
  2. Similarly, we show that the problem of canonically labeling graphs and hypergraphs under arbitrary Abelian permutation group action is fairly well characterized by the problem of computing the integer determinant. In particular, this yields deterministic NC³ and randomized NC² algorithms for the problem.
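As a concrete illustration of the algebraic target of the first reduction, the sketch below solves a linear system over $\mathbb{F}_2$ by plain sequential Gaussian elimination. It only illustrates the problem that the canonical-labeling instance is transformed into, not the reduction itself or the parallel NC² algorithm of the paper; the function name and interface are made up for this example.

```python
# Minimal sketch: solving A x = b over F_2 by Gaussian elimination.
# Illustrates the target problem of the reduction above, not the reduction itself.

def solve_gf2(rows, rhs):
    """rows[i]: list of 0/1 coefficients; rhs[i] in {0,1}.
    Returns one solution as a list of 0/1, or None if the system is inconsistent."""
    m = len(rows)
    n = len(rows[0]) if m else 0
    A = [row[:] + [b] for row, b in zip(rows, rhs)]  # augmented matrix over F_2
    pivot_cols, r = [], 0
    for c in range(n):
        # find a pivot row with a 1 in column c
        pivot = next((i for i in range(r, m) if A[i][c]), None)
        if pivot is None:
            continue
        A[r], A[pivot] = A[pivot], A[r]
        # eliminate column c from every other row (XOR = addition mod 2)
        for i in range(m):
            if i != r and A[i][c]:
                A[i] = [x ^ y for x, y in zip(A[i], A[r])]
        pivot_cols.append(c)
        r += 1
    # inconsistent if a zero row has right-hand side 1
    for i in range(r, m):
        if A[i][n]:
            return None
    x = [0] * n
    for i, c in enumerate(pivot_cols):
        x[c] = A[i][n]
    return x

# Example: x1 + x2 = 1, x2 = 1  ->  x1 = 0, x2 = 1
print(solve_gf2([[1, 1], [0, 1]], [1, 1]))
```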

2.
We report progress on the NL versus UL problem.
  • We show that counting the number of s-t paths in graphs where the number of s-v paths for any v is bounded by a polynomial can be done in FUL, the unambiguous log-space function class. Several new upper bounds follow from this, including $\mathsf{ReachFewL} \subseteq \mathsf{UL}$ and $\mathsf{LFew} \subseteq \mathsf{UL}^{\mathsf{FewL}}$. (A small path-counting sketch follows this list.)
  • We investigate the complexity of min-uniqueness—a central notion in studying the NL versus UL problem. In this regard we revisit the class OptL[log n] and introduce UOptL[log n], an unambiguous version of OptL[log n]. We investigate the relation between UOptL[log n] and other existing complexity classes.
  • We consider the unambiguous hierarchies over UL and UOptL[log n]. We show that the hierarchy over UOptL[log n] collapses. This implies that $\mathsf{ULH} \subseteq \mathsf{L}^{\mathsf{promiseUL}}$, thus collapsing the UL hierarchy.
  • We show that the reachability problem over graphs embedded on 3 pages is complete for NL. This contrasts with the reachability problem over graphs embedded on 2 pages, which is log-space equivalent to the reachability problem in planar graphs and hence is in UL.
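As a concrete (and much weaker) illustration of the counting problem in the first bullet, the sketch below counts s-t paths in a directed acyclic graph by dynamic programming over a topological order. It is a sequential toy that says nothing about unambiguous log-space computation; the graph and function names are made up.

```python
# Illustration only: counting s-t paths in a DAG via a topological order.
from collections import defaultdict, deque

def count_st_paths(n, edges, s, t):
    """Number of distinct s-t paths in a DAG with vertices 0..n-1."""
    adj = defaultdict(list)
    indeg = [0] * n
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    # Kahn's algorithm for a topological order
    order, queue = [], deque(u for u in range(n) if indeg[u] == 0)
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    paths = [0] * n
    paths[s] = 1
    for u in order:                 # every predecessor is processed before its successors
        for v in adj[u]:
            paths[v] += paths[u]
    return paths[t]

# Example: a small DAG with exactly two 0-3 paths (0-1-3 and 0-2-3).
print(count_st_paths(4, [(0, 1), (0, 2), (1, 3), (2, 3)], 0, 3))  # -> 2
```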

    3.
    In the uniform circuit model of computation, the width of a boolean circuit exactly characterizes the "space" complexity of the computed function. Looking for a similar relationship in Valiant's algebraic model of computation, we propose width of an arithmetic circuit as a possible measure of space. In the uniform setting, we show that our definition coincides with that of VPSPACE at polynomial width. We introduce the class VL as an algebraic variant of deterministic log-space L; VL is a subclass of VP. Further, to define algebraic variants of non-deterministic space-bounded classes, we introduce the notion of "read-once" certificates for arithmetic circuits. We show that polynomial-size algebraic branching programs (an algebraic analog of NL) can be expressed as read-once exponential sums over polynomials in VL, i.e. $\mathsf{VBP} \in \Sigma^R \cdot \mathsf{VL}$. Thus, read-once exponential sums can be viewed as a reasonable way of capturing space-bounded non-determinism. We also show that $\Sigma^R \cdot \mathsf{VBP} \subseteq \mathsf{VBP}$, i.e. VBPs are stable under read-once exponential sums. Though the best upper bound we have for $\Sigma^R \cdot \mathsf{VL}$ itself is VNP, we can obtain better upper bounds for width-bounded multiplicatively disjoint (md-) circuits. Without the width restriction, md-arithmetic circuits are known to capture all of VP. We show that read-once exponential sums over md-constant-width arithmetic circuits are within VP and that read-once exponential sums over md-polylog-width arithmetic circuits are within VQP. We also show that exponential sums of a skew formula cannot represent the determinant polynomial.

    4.
    We show that the model checking problem for intuitionistic propositional logic with one variable is complete for logspace-uniform AC¹. For superintuitionistic logics with one variable, we obtain NC¹-completeness for the model checking problem.

    5.
    Approximation Algorithms for the Directed k-Tour and k-Stroll Problems
    We consider two natural generalizations of the Asymmetric Traveling Salesman problem: the k-Stroll and the k-Tour problems. The input to the k-Stroll problem is a directed n-vertex graph with nonnegative edge lengths, an integer k, as well as two special vertices s and t. The goal is to find a minimum-length s-t walk containing at least k distinct vertices (including the endpoints s, t). The k-Tour problem can be viewed as a special case of k-Stroll where s=t; that is, the walk is required to be a tour containing some pre-specified vertex s. When k=n, the k-Stroll problem becomes equivalent to Asymmetric Traveling Salesman Path, and k-Tour to Asymmetric Traveling Salesman. Our main result is a polylogarithmic approximation algorithm for the k-Stroll problem. Prior to our work, only bicriteria $(O(\log^2 k), 3)$-approximation algorithms were known, producing walks whose length is bounded by 3·OPT while the number of vertices visited is $\Omega(k/\log^2 k)$. We also show a simple $O(\log^2 n/\log\log n)$-approximation algorithm for the k-Tour problem. The best previously known approximation algorithms achieved $\min(O(\log^3 k), O(\log^2 n \cdot \log k/\log\log n))$ approximation in polynomial time, and $O(\log^2 k)$ approximation in quasipolynomial time.
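To make the k-Stroll objective concrete, here is a toy exact solver that runs Dijkstra over states (current vertex, set of distinct vertices visited so far). It is exponential in the number of vertices and bears no relation to the polylogarithmic approximation algorithm discussed above; the instance and function names are made up.

```python
# Toy exact k-Stroll solver: Dijkstra over (vertex, frozenset of visited vertices).
import heapq
from itertools import count

def k_stroll_exact(lengths, s, t, k):
    """lengths: dict mapping directed edge (u, v) to a nonnegative length.
    Returns the minimum length of an s-t walk visiting >= k distinct vertices
    (endpoints included), or None if no such walk exists."""
    adj = {}
    for (u, v), w in lengths.items():
        adj.setdefault(u, []).append((v, w))
    tie = count()                        # tie-breaker so the heap never compares sets
    start = (s, frozenset([s]))
    best = {start: 0}
    heap = [(0, next(tie), s, frozenset([s]))]
    while heap:
        d, _, u, seen = heapq.heappop(heap)
        if d > best.get((u, seen), float("inf")):
            continue                     # stale heap entry
        if u == t and len(seen) >= k:
            return d                     # first goal state popped is optimal
        for v, w in adj.get(u, []):
            state = (v, seen | {v})
            if d + w < best.get(state, float("inf")):
                best[state] = d + w
                heapq.heappush(heap, (d + w, next(tie), v, state[1]))
    return None

# Tiny example: the cheapest 0-3 walk must detour to collect k = 4 distinct vertices.
edges = {(0, 3): 1, (0, 1): 1, (1, 2): 1, (2, 3): 1, (3, 0): 5}
print(k_stroll_exact(edges, 0, 3, 4))    # -> 3, via the walk 0-1-2-3
```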

    6.
    The class of polynomials computable by polynomial size log-depth arithmetic circuits (VNC¹) is known to be computable by constant width polynomial degree circuits (VsSC⁰), but whether the converse containment holds is an open problem. As a partial answer to this question, we give a construction which shows that syntactically multilinear circuits of constant width and polynomial degree can be depth-reduced, which in our notation shows that sm-VsSC⁰ ⊆ sm-VNC¹. We further strengthen this inclusion by giving a separate construction that provides a width-efficient simulation of constant width syntactically multilinear circuits by constant width syntactically multilinear algebraic branching programs; in our notation, sm-VsSC⁰ ⊆ sm-VBWBP. We then focus on polynomial size syntactically multilinear circuits and study relationships between classes of functions obtained by imposing various resource (width, depth, degree) restrictions on these circuits. Along the way, we also observe a characterization of the class NC¹ in terms of a restricted class of planar branching programs of polynomial size. Finally, in contrast to the general case, we report closure and stability of coefficient functions for the syntactically multilinear classes studied in this paper.

    7.
    In this paper, we show how to exploit the structure of a particular automata-based construction to efficiently solve the LTL synthesis problem. We focus on a construction proposed by Schewe and Finkbeiner that reduces the synthesis problem to a safety game, which can then be solved by computing the solution of the classical fixpoint equation νX. (Safe ∩ CPre(X)), where CPre(X) is the set of controllable predecessors of X. We have shown in previous works that the sets computed during the fixpoint algorithm can be equipped with a partial order that allows one to represent them very compactly, by the antichain of their maximal elements. However, the computation of CPre(X) cannot be done in polynomial time when X is represented by an antichain (unless P = NP). This motivates the use of SAT solvers to compute CPre(X). Also, we show that the CPre operator can be replaced by a weaker operator CPre_crit in which the adversary is restricted to playing a subset of critical signals. We show that the fixpoints of the two operators coincide, so instead of applying CPre iteratively we can apply CPre_crit iteratively. In practice, this leads to important improvements over previous LTL synthesis methods. The reduction to SAT problems, the weakening of the CPre operator into CPre_crit, and their performance evaluations are new.
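As a generic illustration of the fixpoint νX. (Safe ∩ CPre(X)) on a small explicit-state safety game: the sketch below is the textbook enumeration and does not model the antichain representation or the SAT-based computation of CPre described above; the game encoding and all names are made up.

```python
# Textbook sketch: solve an explicit-state safety game by iterating
# X |-> Safe intersect CPre(X) down to the greatest fixpoint.

def cpre(states, controller_moves, adversary_moves, target):
    """Controllable predecessors: states where the controller has a move such that
    every adversary reply stays in `target`. A controller move leads to an
    intermediate state, from which the adversary picks a successor."""
    good = set()
    for s in states:
        for mid in controller_moves.get(s, []):
            succs = adversary_moves.get(mid, [])
            if succs and all(t in target for t in succs):
                good.add(s)
                break
    return good

def solve_safety(states, controller_moves, adversary_moves, safe):
    """Greatest fixpoint of X |-> safe intersect CPre(X)."""
    x = set(safe)
    while True:
        new_x = set(safe) & cpre(states, controller_moves, adversary_moves, x)
        if new_x == x:
            return x
        x = new_x

# Tiny example: from 'a' and 'b' the controller can stay in {'a', 'b'} forever;
# from 'c' the adversary can escape to the unsafe state 'bad'.
states = {"a", "b", "c", "bad"}
controller_moves = {"a": ["m1"], "b": ["m2"], "c": ["m3"]}
adversary_moves = {"m1": ["b"], "m2": ["a"], "m3": ["a", "bad"]}
print(solve_safety(states, controller_moves, adversary_moves, {"a", "b", "c"}))
# -> {'a', 'b'}
```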

    8.
    We strengthen a previously known connection between the size complexity of two-way finite automata (2fas) and the space complexity of Turing machines (TMs). Specifically, we prove that
  • every s-state 2nfa has a poly(s)-state 2dfa that agrees with it on all inputs of length ≤ s if and only if NL ⊆ L/poly, and
  • every s-state 2nfa has a poly(s)-state 2dfa that agrees with it on all inputs of length ≤ 2^s if and only if NLL ⊆ LL/polylog.
  Here, 2dfas and 2nfas are the deterministic and nondeterministic 2fas, NL and L/poly are the standard classes of languages recognizable in logarithmic space by nondeterministic TMs and by deterministic TMs with access to polynomially long advice, and NLL and LL/polylog are the corresponding complexity classes for space O(log log n) and advice length poly(log n). Our arguments strengthen and extend an old theorem by Berman and Lingas and can be used to obtain variants of the above statements for other modes of computation or other combinations of bounds for the input length, the space usage, and the length of advice.

    9.
    We explore relationships between circuit complexity, the complexity of generating circuits, and algorithms for analyzing circuits. Our results can be divided into two parts:
    1. Lower bounds against medium-uniform circuits. Informally, a circuit class is "medium uniform" if it can be generated by an algorithmic process that is somewhat complex (stronger than LOGTIME) but not infeasible. Using a new kind of indirect diagonalization argument, we prove several new unconditional lower bounds against medium-uniform circuit classes, including:
  • For all k, P is not contained in P-uniform SIZE($n^k$). That is, for all k, there is a language $L_k \in \mathsf{P}$ that does not have $O(n^k)$-size circuits constructible in polynomial time. This improves Kannan's lower bound from 1982 that NP is not in P-uniform SIZE($n^k$) for any fixed k.
  • For all k, NP is not in $\mathsf{P}^{\mathsf{NP}}_{||}$-uniform SIZE($n^k$). This also improves Kannan's theorem, but in a different way: the uniformity condition on the circuits is stronger than that on the language itself.
  • For all k, LOGSPACE does not have LOGSPACE-uniform branching programs of size $n^k$.
    2. Eliminating non-uniformity and (non-uniform) circuit lower bounds. We complement these results by showing how to convert any potential simulation of LOGTIME-uniform NC¹ in ACC⁰/poly or TC⁰/poly into a medium-uniform simulation using small advice. This lemma can be used to simplify the proof that faster SAT algorithms imply NEXP circuit lower bounds and leads to the following new connection:
  • Consider the following task: given a TC⁰ circuit C of $n^{O(1)}$ size, output yes when C is unsatisfiable, and output no when C has at least $2^{n-2}$ satisfying assignments. (Behavior on other inputs can be arbitrary.) Clearly, this problem can be solved efficiently using randomness. If this problem can be solved deterministically in $2^{n-\omega(\log n)}$ time, then $\mathsf{NEXP} \not\subset \mathsf{TC}^0/\mathrm{poly}$.
    Another application is to derandomize randomized TC⁰ simulations of NC¹ on almost all inputs:
  • Suppose $\mathsf{NC}^1 \subseteq \mathsf{BPTC}^0$. Then, for every ε > 0 and every language L in NC¹, there is a LOGTIME-uniform TC⁰ circuit family of polynomial size recognizing a language L′ such that L and L′ differ on at most $2^{n^{\epsilon}}$ inputs of length n, for all n.

    10.
    Regular expressions (RE) are an algebraic formalism for expressing regular languages, widely used in string search and as a specification language in verification. In this paper, we introduce and investigate visibly rational expressions (VRE), an extension of RE for the class of visibly pushdown languages (VPL). We show that VRE capture precisely the class of VPL. Moreover, we identify an equally expressive fragment of VRE which admits a quadratic time compositional translation into the automata acceptors of VPL. We also prove that, for this fragment, universality, inclusion and language equivalence are EXPTIME-complete. Finally, we provide an extension of VRE for VPL over infinite words.

    11.
    Zeev Nutov, Algorithmica, 2014, 70(2): 340–364
    We consider Degree Constrained Survivable Network problems. For the directed Degree Constrained k-Edge-Outconnected Subgraph problem, we slightly improve the best known approximation ratio, by a simple proof. Our main contribution is giving a framework to handle node-connectivity degree constrained problems with the iterative rounding method. In particular, for the degree constrained versions of the Element-Connectivity Survivable Network problem on undirected graphs, and of the k-Outconnected Subgraph problem on both directed and undirected graphs, our algorithm computes a solution J of cost O(log k) times the optimal, with degrees $O(2^k)\cdot b(v)$. Similar results are obtained for the k-Connected Subgraph problem. The latter improves on the only previous degree approximation of $O(k\log n)\cdot b(v)$ in $O(n^k)$ time on undirected graphs, due to Feder, Motwani, and Zhu.

    12.
    We introduce two new natural decision problems, denoted ∃ RATIONAL NASH and ∃ IRRATIONAL NASH, pertinent to the rationality and irrationality, respectively, of Nash equilibria for (finite) strategic games. These problems ask, given a strategic game, whether or not it admits (i) a rational Nash equilibrium where all probabilities are rational numbers, and (ii) an irrational Nash equilibrium where at least one probability is irrational, respectively. We are interested here in the complexities of ∃ RATIONAL NASH and ∃ IRRATIONAL NASH. Towards this end, we study two other decision problems, denoted NASH-EQUIVALENCE and NASH-REDUCTION, pertinent to some mutual properties of the sets of Nash equilibria of two given strategic games with the same number of players. The problem NASH-EQUIVALENCE asks whether or not the two sets of Nash equilibria coincide; we identify a restriction of its complementary problem that witnesses ∃ RATIONAL NASH. The problem NASH-REDUCTION asks whether or not there is a so-called Nash reduction: a suitable map between corresponding strategy sets of players that yields a Nash equilibrium of the former game from a Nash equilibrium of the latter game; we identify a restriction of NASH-REDUCTION that witnesses ∃ IRRATIONAL NASH. As our main result, we provide two distinct reductions showing, respectively, that (i) NASH-EQUIVALENCE is co-$\mathcal{NP}$-hard and ∃ RATIONAL NASH is $\mathcal{NP}$-hard, and (ii) NASH-REDUCTION and ∃ IRRATIONAL NASH are both $\mathcal{NP}$-hard. The reductions significantly extend techniques previously employed by Conitzer and Sandholm (Proceedings of the 18th Joint Conference on Artificial Intelligence, pp. 765–771, 2003; Games Econ. Behav. 63(2), 621–641, 2008).

    13.
    Over the last decade, great progress has been made in developing practical transactional memory (TM) implementations, but relatively little attention has been paid to precisely specifying what it means for them to be correct, or formally proving that they are. In this paper, we present TMS1 (Transactional Memory Specification 1), a precise specification of correct behaviour of a TM runtime library. TMS1 targets TM runtimes used to implement transactional features in an unmanaged programming language such as C or C++. In such contexts, even transactions that ultimately abort must observe consistent states of memory; otherwise, unrecoverable errors such as divide-by-zero may occur before a transaction aborts, even in a correct program in which the error would not be possible if transactions were executed atomically. We specify TMS1 precisely using an I/O automaton (IOA). This approach enables us to also model TM implementations using IOAs and to construct fully formal and machine-checked correctness proofs for them using well established proof techniques and tools. We outline key requirements for a TM system. To avoid precluding any implementation that satisfies these requirements, we specify TMS1 to be as general as we can, consistent with these requirements. The cost of such generality is that the condition does not map closely to intuition about common TM implementation techniques, and thus it is difficult to prove that such implementations satisfy the condition. To address this concern, we present TMS2, a more restrictive condition that more closely reflects intuition about common TM implementation techniques. We present a simulation proof that TMS2 implements TMS1, thus showing that to prove that an implementation satisfies TMS1, it suffices to prove that it satisfies TMS2. We have formalised and verified this proof using the PVS specification and verification system.

    14.
    We study the problem of packing element-disjoint Steiner trees in graphs. We are given a graph and a designated subset of terminal nodes, and the goal is to find a maximum cardinality set of element-disjoint trees such that each tree contains every terminal node. An element means a non-terminal node or an edge. (Thus, each non-terminal node and each edge must be in at most one of the trees.) We show that the problem is APX-hard when there are only three terminal nodes, thus answering an open question. Our main focus is on the special case when the graph is planar. We show that the problem of finding two element-disjoint Steiner trees in a planar graph is NP-hard. Similarly, the problem of finding two edge-disjoint Steiner trees in a planar graph is NP-hard. We design an algorithm for planar graphs that achieves an approximation guarantee close to 2. In fact, given a planar graph that is k element-connected on the terminals (k is an upper bound on the number of element-disjoint Steiner trees), the algorithm returns $\lfloor\frac{k}{2}\rfloor-1$ element-disjoint Steiner trees. Using this algorithm, we get an approximation algorithm for the edge-disjoint version of the problem on planar graphs that improves on the previous approximation guarantees. We also show that the natural LP relaxation of the planar problem has an integrality ratio approaching 2.

    15.
    16.
    The Contractibility problem takes as input two graphs G and H, and the task is to decide whether H can be obtained from G by a sequence of edge contractions. The Induced Minor and Induced Topological Minor problems are similar, but the first allows both edge contractions and vertex deletions, whereas the latter allows only vertex deletions and vertex dissolutions. All three problems are NP-complete, even for certain fixed graphs H. We show that these problems can be solved in polynomial time for every fixed H when the input graph G is chordal. Our results can be considered tight, since these problems are known to be W[1]-hard on chordal graphs when parameterized by the size of H. To solve Contractibility and Induced Minor, we define and use a generalization of the classic Disjoint Paths problem, where we require the vertices of each of the k paths to be chosen from a specified set. We prove that this variant is NP-complete even when k=2, but that it is polynomial-time solvable on chordal graphs for every fixed k. Our algorithm for Induced Topological Minor is based on another generalization of Disjoint Paths called Induced Disjoint Paths, where vertices from different paths are not allowed to be adjacent. We show that this problem, which is known to be NP-complete when k=2, can be solved in polynomial time on chordal graphs even when k is part of the input. Our results fit into the general framework of graph containment problems, where the aim is to decide whether a graph can be modified into another graph by a sequence of specified graph operations. Allowing combinations of the four well-known operations edge deletion, edge contraction, vertex deletion, and vertex dissolution results in the following ten containment relations: (induced) minor, (induced) topological minor, (induced) subgraph, (induced) spanning subgraph, dissolution, and contraction. Our results, combined with existing results, settle the complexity of each of the ten corresponding containment problems on chordal graphs.

    17.
    Zeev Nutov, Algorithmica, 2012, 63(1–2): 398–410
    We consider the (undirected) Node Connectivity Augmentation (NCA) problem: given a graph J=(V, E_J) and connectivity requirements $\{r(u,v): u,v \in V\}$, find a minimum size set I of new edges (any edge is allowed) such that the graph J ∪ I contains r(u,v) internally-disjoint uv-paths, for all u, v ∈ V. In Rooted NCA there is s ∈ V such that r(u,v)>0 implies u=s or v=s. For large values of $k=\max_{u,v \in V} r(u,v)$, NCA is at least as hard to approximate as Label-Cover and thus is unlikely to admit an approximation ratio polylogarithmic in k. Rooted NCA is at least as hard to approximate as Hitting-Set. The previously best approximation ratios for the problem were $O(k \ln n)$ for NCA and $O(\ln n)$ for Rooted NCA. In this paper we give an approximation algorithm with ratios $O(k \ln^2 k)$ for NCA and $O(\ln^2 k)$ for Rooted NCA. This is the first approximation algorithm with ratio independent of n, and thus is a constant for any fixed k. Our algorithm is based on the following new structural result, which is of independent interest: if $\mathcal{D}$ is a set of node pairs in a graph J, then the maximum degree in the hypergraph formed by the inclusion-minimal tight sets separating at least one pair in $\mathcal{D}$ is $O(\ell^2)$, where $\ell$ is the maximum connectivity in J of a pair in $\mathcal{D}$.

    18.
    We study a simple technique, originally presented by Herlihy (ACM Trans. Program. Lang. Syst. 15(5):745–770, 1993), for executing concurrently, in a wait-free manner, blocks of code that have been programmed for sequential execution and require significant synchronization in order to be performed in parallel. We first present an implementation of this technique, called Sim, which employs a collect object. We describe a simple implementation of a collect object from a single shared object that supports atomic Add (or XOR) in addition to read; this implementation has step complexity O(1). By plugging this implementation into Sim, Sim exhibits constant step complexity as well. This allows us to derive lower bounds on the step complexity of implementations of several shared objects, like Add, XOR, collect, and snapshot objects, from LL/SC objects. We then present a practical version of Sim, called PSim, which is implemented on a real shared-memory machine. From a theoretical perspective, PSim has worse step complexity than Sim, its theoretical analog; in practice, though, we experimentally show that PSim is highly efficient: it outperforms several state-of-the-art lock-based and lock-free synchronization techniques, and this despite the fact that it is wait-free, i.e. it satisfies a stronger progress condition than all the algorithms that it outperforms. We have used PSim to get highly efficient wait-free implementations of stacks and queues.

    19.
    We consider the problem of finding a sparse multiple of a polynomial. Given $f \in F[x]$ of degree d over a field F, and a desired sparsity t, our goal is to determine if there exists a multiple $h \in F[x]$ of f such that h has at most t non-zero terms, and if so, to find such an h. When $F=\mathbb{Q}$ and t is constant, we give an algorithm which requires polynomial time in d and the size of the coefficients of h. When F is a finite field, we show that the problem is at least as hard as determining the multiplicative order of elements in an extension field of F (a problem thought to have complexity similar to that of factoring integers), and this lower bound is tight when t=2.
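For intuition about the t = 2 case over a finite field, the sketch below works over $\mathbb{F}_2$: when f has a nonzero constant term, the sparsest binomial multiple x^m + 1 of f corresponds to the multiplicative order of x modulo f, which the sketch finds by brute force. This only illustrates the connection stated in the abstract, not the authors' algorithm; the helper names and encoding are made up.

```python
# Illustration of the t = 2 case over F_2: the least m with x^m = 1 (mod f)
# gives the binomial multiple x^m + 1 of f. Polynomials over F_2 are encoded
# as Python ints (bit i = coefficient of x^i). Brute-force sketch only.

def poly_mod(a, m):
    """Remainder of a modulo m in F_2[x]."""
    while a.bit_length() >= m.bit_length():
        a ^= m << (a.bit_length() - m.bit_length())
    return a

def binomial_multiple(f):
    """Smallest m such that x^m + 1 is a multiple of f over F_2.
    Requires deg f >= 1 and f(0) = 1. Returns (m, the multiple as an int)."""
    assert f > 1 and f & 1, "f must have degree >= 1 and a nonzero constant term"
    r, m = poly_mod(2, f), 1            # r = x mod f
    while r != 1:
        r = poly_mod(r << 1, f)          # multiply by x and reduce mod f
        m += 1
    return m, (1 << m) | 1               # x^m + 1

# Example: f = x^2 + x + 1 (0b111) divides x^3 + 1, so the order of x mod f is 3.
print(binomial_multiple(0b111))          # -> (3, 9), i.e. x^3 + 1
```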

    20.
    We consider transactional memory contention management in the context of balanced workloads, where if a transaction is writing, the number of write operations it performs is a constant fraction of its total reads and writes. We explore the theoretical performance boundaries of contention management in balanced workloads from the worst-case perspective by presenting and analyzing two new polynomial time contention management algorithms. We analyze the performance of a contention management algorithm by comparison with an optimal offline contention management algorithm to provide a competitive ratio. The first algorithm, Clairvoyant, is $O(\sqrt{s})$-competitive, where s is the number of shared resources. This algorithm depends on explicitly knowing the conflict graph at each time step of execution. The second algorithm, Non-Clairvoyant, is $O(\sqrt{s} \cdot \log n)$-competitive, with high probability, where n is the number of transactions; this is only an O(log n) factor worse, but does not require knowledge of the conflict graph. Both of these algorithms are greedy. We also prove that the performance of Clairvoyant is close to optimal, since there is no polynomial time contention management algorithm for the balanced transaction scheduling problem that is better than $O((\sqrt{s})^{1-\varepsilon})$-competitive for any constant ε>0, unless NP ⊆ ZPP. To our knowledge, these results are significant improvements over the best previously known O(s) competitive ratio bound.
