Similar Articles
20 similar articles found.
1.
We consider planning with uncertainty in the initial state as a case study of incremental quantified Boolean formula (QBF) solving. We report on experiments with a workflow to incrementally encode a planning instance into a sequence of QBFs. To solve this sequence of successively constructed QBFs, we use our general-purpose incremental QBF solver DepQBF. Since the generated QBFs have many clauses and variables in common, our approach avoids redundancy in both the encoding phase and the solving phase. We also present experiments with incremental preprocessing techniques that are based on blocked clause elimination (QBCE). QBCE makes it possible to eliminate certain clauses from a QBF in a satisfiability-preserving way. We implemented the QBCE-based techniques in DepQBF in three variants: as preprocessing, as inprocessing (which extends preprocessing by taking into account variable assignments that were fixed by the QBF solver), and as a novel dynamic approach where QBCE is tightly integrated into the solving process. For DepQBF, experimental results show that incremental QBF solving with incremental QBCE outperforms incremental QBF solving without QBCE, which in turn outperforms nonincremental QBF solving. For the first time we report on incremental QBF solving with incremental QBCE as inprocessing. Our results are the first empirical study of incremental QBF solving in the context of planning and motivate its use in other application domains.
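As a rough illustration of the clause-elimination rule this abstract builds on, here is a minimal sketch of plain propositional blocked-clause elimination; QBCE, used in the paper, is the QBF-aware refinement in which only existential literals may block a clause and the tautology witness is constrained by the quantifier prefix. This is illustrative only, not DepQBF's implementation.

```python
# Sketch: propositional blocked-clause elimination (BCE).
# Clauses are frozensets of integer literals (a negative int is a negated variable).

def is_tautology(clause):
    return any(-lit in clause for lit in clause)

def blocked(clause, lit, formula):
    """clause is blocked on lit if every resolvent on lit with the rest of the
    formula is a tautology."""
    for other in formula:
        if other is clause or -lit not in other:
            continue
        resolvent = (clause - {lit}) | (other - {-lit})
        if not is_tautology(resolvent):
            return False
    return True

def bce(formula):
    """Repeatedly remove clauses that are blocked on one of their literals."""
    formula = set(formula)
    changed = True
    while changed:
        changed = False
        for clause in list(formula):
            if any(blocked(clause, lit, formula) for lit in clause):
                formula.remove(clause)
                changed = True
                break
    return formula

# Example: (a ∨ b) ∧ (¬a ∨ ¬b) ∧ (b ∨ c)
f = {frozenset({1, 2}), frozenset({-1, -2}), frozenset({2, 3})}
print(bce(f))   # prints set(): every clause is eventually blocked, so the formula is satisfiable
```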

2.
The Jones polynomial, discovered in 1984, is an important knot invariant in topology. Among its many connections to various mathematical and physical areas, it is known (due to Witten) to be intimately connected to Topological Quantum Field Theory (TQFT). The works of Freedman, Kitaev, Larsen and Wang provide an efficient simulation of TQFT by a quantum computer, and vice versa. These results implicitly imply the existence of an efficient (namely, polynomial) quantum algorithm that provides a certain additive approximation of the Jones polynomial at the fifth root of unity, e^{2πi/5}, and moreover, that this problem is BQP-complete. Unfortunately, this important algorithm was never explicitly formulated. Moreover, the results of Freedman et al. are heavily based on TQFT, which makes the algorithm essentially inaccessible to computer scientists. We provide an explicit and simple polynomial quantum algorithm to approximate the Jones polynomial of an n-strand braid with m crossings at any primitive root of unity e^{2πi/k}, where the running time of the algorithm is polynomial in m, n and k. Our algorithm is based not on TQFT but on well-known mathematical results (specifically, the path model representation of the braid group and the uniqueness of the Markov trace for the Temperley–Lieb algebra). By the results of Freedman et al., our algorithm solves a BQP-complete problem. Our algorithm works by encoding the local structure of the problem into the local unitary gates which are applied by the circuit. This structure is significantly different from previous quantum algorithms, which are mostly based on the quantum Fourier transform. Since the results of the current paper were presented in their preliminary form, these ideas have been extended and generalized in several interesting directions. Most notably, Aharonov, Arad, Eban and Landau give a simplification and extension of these results that provides additive approximations for all points of the Tutte polynomial, including the Jones polynomial at any point, and the Potts model partition function at any temperature and any set of coupling strengths. We hope and believe that the ideas presented in this work will have other extensions and generalizations.
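The additive approximation such algorithms produce boils down to estimating real and imaginary parts of matrix elements of a unitary via the Hadamard test. Below is a small classical simulation of that subroutine for intuition; it is a generic sketch, not the paper's circuit for the path-model representation of the braid group.

```python
import numpy as np

def hadamard_test(U, psi, shots=200_000, seed=0):
    """Classically simulate the Hadamard test: the ancilla measures |0> with
    probability (1 + Re<psi|U|psi>)/2, so the empirical bias estimates
    Re<psi|U|psi> to additive error ~ 1/sqrt(shots)."""
    rng = np.random.default_rng(seed)
    p0 = (1.0 + np.vdot(psi, U @ psi).real) / 2.0
    zeros = rng.binomial(shots, p0)
    return 2.0 * zeros / shots - 1.0

# Example: a single-qubit phase rotation by a fifth root of unity.
theta = 2 * np.pi / 5
U = np.diag([1.0, np.exp(1j * theta)])
psi = np.array([1.0, 1.0]) / np.sqrt(2)
print(hadamard_test(U, psi))   # close to (1 + cos(2*pi/5)) / 2
```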

3.
Given a multivariate polynomial P(X_1,…,X_n) over a finite field F_q, let N(P) denote the number of roots of P over F_q. The modular root counting problem is, given a modulus r, to determine N_r(P) = N(P) mod r. We study the complexity of computing N_r(P) when the polynomial is given as a sum of monomials. We give an efficient algorithm to compute N_r(P) when the modulus r is a power of the characteristic of the field. We show that for all other moduli, the problem of computing N_r(P) is NP-hard. We present some hardness results which imply that our algorithm is essentially optimal for prime fields. We show an equivalence between maximum-likelihood decoding for Reed–Solomon codes and a root-finding problem for symmetric polynomials. P. Gopalan’s and R.J. Lipton’s research was supported by NSF grant CCR-3606B64. V. Guruswami’s research was supported in part by NSF grant CCF-0343672 and a Sloan Research Fellowship.
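To make the quantity concrete, here is a brute-force sketch that computes N_r(P) over a prime field for a polynomial given as a sum of monomials; it is exponential in the number of variables and only illustrates the definition, not the paper's efficient algorithm for moduli that are powers of the characteristic.

```python
# Sketch: brute-force modular root counting over a prime field F_p.
from itertools import product

def count_roots_mod(monomials, p, n, r):
    """monomials: list of (coeff, (e_1, ..., e_n)); returns N(P) mod r over F_p."""
    count = 0
    for point in product(range(p), repeat=n):
        value = 0
        for coeff, exps in monomials:
            term = coeff % p
            for x, e in zip(point, exps):
                term = (term * pow(x, e, p)) % p
            value = (value + term) % p
        if value == 0:
            count += 1
    return count % r

# Example: P(X1, X2) = X1^2 + X2 over F_5, root count modulo 3.
P = [(1, (2, 0)), (1, (0, 1))]
print(count_roots_mod(P, p=5, n=2, r=3))   # N(P) = 5 (one root per value of X1), so prints 2
```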

4.
In the context of reasoning on quantified Boolean formulas (QBFs), the extension of propositional logic with existential and universal quantifiers, it is beneficial to use preprocessing for solving QBF encodings of real-world problems. Preprocessing applies rewriting techniques that preserve (satisfiability) equivalence and that do not destroy the formula’s CNF structure. In many cases, preprocessing either solves a formula directly or modifies it such that information helpful to solvers becomes better visible and irrelevant information is hidden. In the past, however, applying a preprocessor prevented the extraction of proofs for the original formula. Such proofs are required to independently validate the correctness of the preprocessor’s rewritings and the solver’s result, as well as for the extraction of solutions in terms of Skolem functions. In particular, for the important technique of universal expansion, efficient proof checking and solution generation were not possible so far. This article presents a unified proof system with three simple rules based on quantified resolution asymmetric tautology (\(\mathsf {QRAT}\)). In combination with an extended version of universal reduction, we use this proof system to efficiently express current preprocessing techniques, including universal expansion. Further, we develop an approach for extracting Skolem functions. We equip the preprocessor bloqqer with \(\mathsf {QRAT}\) proof logging and provide a proof checker for \(\mathsf {QRAT}\) proofs which is also able to extract Skolem functions.
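At the core of checking QRAT-style proofs is the asymmetric-tautology test: a clause C follows from a formula F by unit propagation if propagating F together with the negation of C yields a conflict. The sketch below shows only this propositional ingredient and ignores the quantifier-prefix side conditions that QRAT adds.

```python
def unit_propagate(clauses):
    """Return False if unit propagation derives a conflict, True otherwise.
    Clauses are lists of integer literals; a negative int is a negated variable."""
    clauses = [set(c) for c in clauses]
    assigned = set()
    while True:
        unit = None
        for c in clauses:
            if any(lit in assigned for lit in c):
                continue                      # clause already satisfied
            unassigned = [lit for lit in c if -lit not in assigned]
            if not unassigned:
                return False                  # conflict: clause falsified
            if len(unassigned) == 1:
                unit = unassigned[0]
                break
        if unit is None:
            return True                       # propagation finished without conflict
        assigned.add(unit)

def is_asymmetric_tautology(clause, formula):
    """C has AT w.r.t. F iff unit propagation on F plus the unit clauses
    {-l : l in C} produces a conflict, i.e. F implies C by unit propagation."""
    return not unit_propagate(list(formula) + [[-lit] for lit in clause])

# Example: F = (a) ∧ (¬a ∨ b); the clause (b) follows by unit propagation.
F = [[1], [-1, 2]]
print(is_asymmetric_tautology([2], F))   # True
print(is_asymmetric_tautology([3], F))   # False
```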

5.
A self-adaptive multi-engine solver for quantified Boolean formulas
In this paper we study the problem of engineering a robust solver for quantified Boolean formulas (QBFs), i.e., a tool that can efficiently solve formulas across different problem domains without the need for domain-specific tuning. The paper presents two main empirical results along this line of research. Our first result is the development of a multi-engine solver, i.e., a tool that selects among its reasoning engines the one that is most likely to yield optimal results. In particular, we show that syntactic QBF features can be correlated with the performance of existing QBF engines across a variety of domains. We also show how a multi-engine solver can be obtained by carefully picking state-of-the-art QBF solvers as basic engines, and by harnessing inductive reasoning techniques to learn engine-selection policies. Our second result is the improvement of our multi-engine solver with the capability of updating the learned policies when they fail to give good predictions. In this way the solver also becomes self-adaptive, i.e., able to adjust its internal models when the usage scenario changes substantially. The rewarding results obtained in our experiments show that our solver AQME—Adaptive QBF Multi-Engine—can be more robust and efficient than state-of-the-art single-engine solvers, even when confronted with previously uncharted formulas and competitors.
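A minimal sketch of the engine-selection idea: map cheap syntactic features of a formula to the engine that did best on similar formulas seen before. The feature names, engine names, and the 1-nearest-neighbour policy below are illustrative placeholders, not AQME's actual features or learning machinery.

```python
# Sketch: pick a QBF engine by nearest-neighbour lookup over syntactic features.
import math

# Training data: (feature vector, best engine). Features could be, e.g.,
# (#clauses / #vars, #quantifier alternations, fraction of universal variables).
history = [
    ((2.1, 3, 0.10), "engine_A"),
    ((4.5, 2, 0.40), "engine_B"),
    ((3.8, 7, 0.55), "engine_C"),
]

def select_engine(features):
    _, engine = min(history, key=lambda h: math.dist(h[0], features))
    return engine

print(select_engine((4.0, 6, 0.5)))   # "engine_C" for this toy history
```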

6.
Most state-of-the-art approaches for Satisfiability Modulo Theories (SMT(\(\mathcal{T}\))) rely on the integration between a SAT solver and a decision procedure for sets of literals in the background theory \(\mathcal{T}\) (a \(\mathcal{T}\)-solver). Often \(\mathcal{T}\) is the combination \(\mathcal{T}_1 \cup \mathcal{T}_2\) of two (or more) simpler theories (SMT(\(\mathcal{T}_1 \cup \mathcal{T}_2\))), such that the specific \(\mathcal{T}_i\)-solvers must be combined. Up to a few years ago, the standard approach to SMT(\(\mathcal{T}_1 \cup \mathcal{T}_2\)) was to integrate the SAT solver with one combined \(\mathcal{T}_1 \cup \mathcal{T}_2\)-solver, obtained from two distinct \(\mathcal{T}_i\)-solvers by means of evolutions of Nelson and Oppen’s (NO) combination procedure, in which the \(\mathcal{T}_i\)-solvers deduce and exchange interface equalities. Nowadays many state-of-the-art SMT solvers use evolutions of a more recent SMT(\(\mathcal{T}_1 \cup \mathcal{T}_2\)) procedure called Delayed Theory Combination (DTC), in which each \(\mathcal{T}_i\)-solver interacts directly and only with the SAT solver, in such a way that part or all of the (possibly very expensive) reasoning effort on interface equalities is delegated to the SAT solver itself. In this paper we present a comparative analysis of DTC vs. NO for SMT(\(\mathcal{T}_1 \cup \mathcal{T}_2\)). On the one hand, we explain the advantages of DTC in exploiting the power of modern SAT solvers to reduce the search. On the other hand, we show that the extra amount of Boolean search required of the SAT solver can be controlled. In fact, we prove two novel theoretical results, for both convex and non-convex theories and for different deduction capabilities of the \(\mathcal{T}_i\)-solvers, which relate the amount of extra Boolean search required of the SAT solver by DTC to the number of deductions and case-splits required of the \(\mathcal{T}_i\)-solvers by NO in order to perform the same tasks: (i) under the same hypotheses on the deduction capabilities of the \(\mathcal{T}_i\)-solvers required by NO, DTC causes no extra Boolean search; (ii) using \(\mathcal{T}_i\)-solvers with limited or no deduction capabilities, the extra Boolean search required can be reduced down to a negligible amount by controlling the quality of the \(\mathcal{T}\)-conflict sets returned by the \(\mathcal{T}_i\)-solvers.
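A toy illustration of what both NO and DTC must ultimately decide: an arrangement of the interface equalities over shared variables on which all theory solvers agree. DTC delegates this case-splitting to the SAT engine; the sketch below simply brute-forces it, with two hard-coded stub "theory solvers" for one tiny unsatisfiable instance.

```python
# Toy: enumerate arrangements of interface equalities and ask each theory stub.
#   T1 (EUF):        f(x) != f(y)          -- consistent only if x != y
#   T2 (arithmetic): x <= y  and  y <= x   -- consistent only if x == y
from itertools import product

interface_eqs = [("x", "y")]          # candidate equalities over shared variables

def euf_consistent(arrangement):
    return arrangement[("x", "y")] is False     # f(x) != f(y) forces x != y

def arith_consistent(arrangement):
    return arrangement[("x", "y")] is True      # x <= y and y <= x force x == y

def combined_check():
    for bits in product([True, False], repeat=len(interface_eqs)):
        arrangement = dict(zip(interface_eqs, bits))
        if euf_consistent(arrangement) and arith_consistent(arrangement):
            return arrangement        # both solvers accept: combined formula satisfiable
    return None                       # no arrangement works: combined formula unsatisfiable

print(combined_check())               # None -- the toy instance is unsatisfiable
```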

7.
Constant-time distributed dominating set approximation
Finding a small dominating set is one of the most fundamental problems of classical graph theory. In this paper, we present a new fully distributed approximation algorithm based on LP relaxation techniques. For an arbitrary, possibly constant parameter k and maximum node degree Δ, our algorithm computes a dominating set of expected size O(kΔ^{2/k} log(Δ) · |DS_OPT|) in O(k²) rounds, where DS_OPT denotes an optimal dominating set. Each node has to send O(k²Δ) messages of size O(log Δ). This is the first algorithm which achieves a non-trivial approximation ratio in a constant number of rounds. Received: 9 September 2003, Accepted: 2 September 2004, Published online: 13 January 2005. The work presented in this paper was supported (in part) by the National Competence Center in Research on Mobile Information and Communication Systems (NCCR-MICS), a center supported by the Swiss National Science Foundation under grant number 5005-67322.
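For orientation only, here is the classical sequential greedy heuristic for dominating set; it is not the paper's distributed LP-relaxation algorithm (which runs in a constant number of communication rounds), just a simple baseline for the problem.

```python
# Sketch: sequential greedy dominating set (classical ln(Delta)-style baseline).
def greedy_dominating_set(adj):
    """adj: dict mapping each node to the set of its neighbours."""
    uncovered = set(adj)                         # nodes not yet dominated
    dom = set()
    while uncovered:
        # pick the node covering the most currently uncovered nodes
        best = max(adj, key=lambda v: len((adj[v] | {v}) & uncovered))
        dom.add(best)
        uncovered -= adj[best] | {best}
    return dom

# Example: a star with centre 0 plus an isolated edge 5-6.
graph = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}, 5: {6}, 6: {5}}
print(greedy_dominating_set(graph))              # {0, 5}
```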

8.
Krivine presents a machine that produces weak head normal form results. Sestoft introduces several call-by-need variants of Krivine's machine that implement result sharing by pushing update markers on the stack, in a way similar to the TIM and the STG machine. When a sequence of consecutive markers appears on the stack, all but the first cause redundant updates. Improvements related to these sequences have dealt with either the consumption of the markers or the removal of the markers once they appear. Here we present an improvement that eliminates the production of marker sequences of length greater than one, resulting in a more space- and time-efficient variant of the lazy Krivine machine. We then apply the classic optimization of short-circuiting operand variable dereferences to create a second call-by-need variant. Finally, we combine the two improvements in a single machine. On our benchmarks this combined machine uses half the stack space, performs one quarter as many updates, and executes between 27% faster and 17% slower than our ℒ variant of Sestoft’s lazy Krivine machine. More interestingly, on one benchmark ℒ and both single-improvement machines consume unbounded space, but the combined machine consumes constant space. Our comparisons to Sestoft’s Mark 2 machine are not exact, however, since we restrict ourselves to unpreprocessed closed lambda terms. Our variant of his machine does no environment trimming, no conversion to de Bruijn-style variable access, and does not provide basic constants, data type constructors, or the recursive let. (The Y combinator is used instead.)
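For orientation, a minimal sketch of the original call-by-name Krivine machine on de Bruijn-indexed terms. It performs no sharing and uses no update markers, so it shows only the starting point, not the call-by-need improvements the paper studies.

```python
# Terms use de Bruijn indices: ("var", n) | ("lam", body) | ("app", fun, arg).
# A closure pairs a term with its environment (a tuple of closures).
def krivine(term, env=(), stack=()):
    """Reduce a closed term to weak head normal form with the call-by-name
    Krivine machine: state = (term, environment, stack of argument closures)."""
    while True:
        tag = term[0]
        if tag == "app":                      # push the argument closure, enter the function
            stack = ((term[2], env),) + stack
            term = term[1]
        elif tag == "lam":
            if not stack:                     # no arguments left: WHNF reached
                return term, env
            closure, stack = stack[0], stack[1:]
            env = (closure,) + env            # bind de Bruijn index 0 to the argument
            term = term[1]
        else:                                 # ("var", n): enter the n-th closure
            term, env = env[term[1]]

# Example: (\x. \y. x) A B  reduces to  A  (here A = \z. z, B = \z. z z).
identity = ("lam", ("var", 0))
self_app = ("lam", ("app", ("var", 0), ("var", 0)))
K = ("lam", ("lam", ("var", 1)))
prog = ("app", ("app", K, identity), self_app)
print(krivine(prog))                          # WHNF is the identity function
```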

9.
In the free poset Boolean algebra F(P), the uniqueness of the normal form of non-zero elements is proved, and the notion of the support of a non-zero element is therefore well defined. An Inclusion–Exclusion-like formula is given by defining it, for each non-zero element x ∈ F(P), using the support of x in a very natural way.

10.
11.
We consider a semi-online version of the multiprocessor scheduling problem on three processors, where the total size of the tasks is known in advance. We prove a lower bound on the competitive ratio of any algorithm and propose a simple algorithm with competitive ratio equal to 1.5. The performance is further improved by a preprocessing strategy; the resulting algorithm is only 2% away from the lower bound. Z. Tuza is supported in part by the Hungarian Scientific Research Fund, OTKA grant T-049613.
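For contrast with the semi-online setting, here is the plain greedy list-scheduling baseline that ignores the known total size; it is illustrative only and is not the paper's 1.5-competitive algorithm.

```python
# Sketch: greedy list scheduling on three machines (assign each incoming task to
# the currently least-loaded machine).
def list_schedule(tasks, machines=3):
    loads = [0.0] * machines
    assignment = []
    for size in tasks:                       # tasks arrive one by one
        m = loads.index(min(loads))          # least-loaded machine
        loads[m] += size
        assignment.append(m)
    return loads, assignment

loads, assignment = list_schedule([3, 3, 2, 2, 2])
print(max(loads), assignment)                # makespan 5.0, assignment [0, 1, 2, 2, 0]
```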

12.
Roie Zivan  Amnon Meisels 《Constraints》2006,11(2-3):179-197
An algorithm that performs asynchronous backtracking on distributed constraint satisfaction problems (DisCSPs) with dynamic ordering of agents, ABT_DO, is proposed. Agents propose reorderings of lower-priority agents and send these proposals whenever they send assignment messages. Changes of ordering trigger a different computation of Nogoods. The dynamically ordered asynchronous backtracking algorithm uses polynomial space, similarly to standard ABT. The algorithm, with three different ordering heuristics, is compared to standard ABT on randomly generated DisCSPs. A Nogood-triggered heuristic, inspired by dynamic backtracking, is found to outperform static order by a large factor in run-time and to improve the network load.

13.
In [10], the existence of transparent long proofs for \(\mathrm{NP}_{\mathbb{R}}\) was recently established. The latter denotes the class of real number decision problems verifiable in polynomial time, as introduced by Blum et al. [6]. The present paper is devoted to the question of what impact a potential full PCP theorem over the real numbers would have on approximation issues in the BSS model of computation. We study two natural optimization problems in the BSS model. The first, denoted by MAX-QPS, is related to polynomial systems; the other, MAX-q-CAP, deals with algebraic circuits. Our main results combine the PCP framework over \(\mathbb{R}\) with approximation issues for these two problems. We also give a negative approximation result for a variant of the MAX-QPS problem.

14.
Computer graphics is ostensibly based on projective geometry. The graphics pipeline—the sequence of functions applied to 3D geometric primitives to determine a 2D image—is described in the graphics literature as taking the primitives from Euclidean to projective space, and then back to Euclidean space. This is a weak foundation for computer graphics. An instructor is at a loss: one day entering the classroom and invoking the established and venerable theory of projective geometry while asserting that projective spaces are not separable, and then entering the classroom the following week to tell the students that the standard graphics pipeline performs clipping not in Euclidean, but in projective space—precisely the operation (deciding sidedness, which depends on separability) that was deemed nonsensical. But there is no need to present Blinn and Newell’s algorithm (Comput. Graph. 12, 245–251, 1978; Commun. ACM 17, 32–42, 1974)—the crucial clipping step in the graphics pipeline and, perhaps, the most original knowledge a student learns in a fourth-year computer graphics class—as a clever trick that just works. Jorge Stolfi described oriented projective geometry in 1991. By declaring a homogeneous vector and its negation to be distinct, Blinn and Newell were already unknowingly working in oriented projective space. This paper presents the graphics pipeline on this stronger foundation.
Sherif Ghali
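To make the clipping discussion concrete, here is a small sketch of the sidedness tests performed on homogeneous clip-space coordinates before the perspective divide, assuming OpenGL-style conventions; it is illustrative and not the paper's oriented-projective formulation.

```python
# Sketch: frustum sidedness tests on homogeneous clip-space coordinates (x, y, z, w),
# performed before the division by w. OpenGL-style convention: a point is inside the
# canonical view volume iff -w <= x <= w, -w <= y <= w, -w <= z <= w.
def clip_flags(x, y, z, w):
    """Return the set of frustum planes the point lies outside of."""
    outside = set()
    if x < -w: outside.add("left")
    if x >  w: outside.add("right")
    if y < -w: outside.add("bottom")
    if y >  w: outside.add("top")
    if z < -w: outside.add("near")
    if z >  w: outside.add("far")
    return outside

def segment_trivially_rejected(p, q):
    """A segment can be discarded if both endpoints are outside the same plane."""
    return bool(clip_flags(*p) & clip_flags(*q))

print(clip_flags(0.5, 0.0, 0.2, 1.0))                               # set(): point is visible
print(segment_trivially_rejected((2.0, 0, 0, 1), (3.0, 1, 0, 1)))   # True: both beyond the right plane
```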

15.
16.
In this paper, we consider the $(\in_{\gamma},\in_{\gamma} \vee \; \hbox{q}_{\delta})$-fuzzy and $(\overline{\in}_{\gamma},\overline{\in}_{\gamma} \vee \; \overline{\hbox{q}}_{\delta})$-fuzzy subnear-rings (ideals) of a near-ring. Some new characterizations are also given. In particular, we introduce the concepts of (strong) prime $(\in_{\gamma},\in_{\gamma} \vee \; \hbox{q}_{\delta})$-fuzzy ideals of near-rings and discuss the relationship between strong prime $(\in_{\gamma},\in_{\gamma} \vee \; \hbox{q}_{\delta})$-fuzzy ideals and prime $(\in_{\gamma},\in_{\gamma} \vee \; \hbox{q}_{\delta})$-fuzzy ideals of near-rings.

17.
In the uniform circuit model of computation, the width of a Boolean circuit exactly characterizes the “space” complexity of the computed function. Looking for a similar relationship in Valiant’s algebraic model of computation, we propose the width of an arithmetic circuit as a possible measure of space. In the uniform setting, we show that our definition coincides with that of VPSPACE at polynomial width. We introduce the class VL as an algebraic variant of deterministic log-space L; VL is a subclass of VP. Further, to define algebraic variants of non-deterministic space-bounded classes, we introduce the notion of “read-once” certificates for arithmetic circuits. We show that polynomial-size algebraic branching programs (an algebraic analog of NL) can be expressed as read-once exponential sums over polynomials in VL, i.e. VBP ⊆ Σ^R · VL. Thus, read-once exponential sums can be viewed as a reasonable way of capturing space-bounded non-determinism. We also show that Σ^R · VBP ⊆ VBP, i.e. VBPs are stable under read-once exponential sums. Though the best upper bound we have for Σ^R · VL itself is VNP, we can obtain better upper bounds for width-bounded multiplicatively disjoint (md) circuits. Without the width restriction, md-arithmetic circuits are known to capture all of VP. We show that read-once exponential sums over md-constant-width arithmetic circuits are within VP and that read-once exponential sums over md-polylog-width arithmetic circuits are within VQP. We also show that exponential sums of a skew formula cannot represent the determinant polynomial.

18.
Recently, due to the rapid growth of electronic data having graph structures, such as HTML and XML texts and chemical compounds, many researchers have been interested in data mining and machine learning techniques for finding useful patterns from graph-structured data (graph data). Since graph data contain a huge number of substructures and it tends to be computationally expensive to decide whether or not such data have given structural features, graph mining problems face computational difficulties. Let \(\mathcal{G}\) be a graph class which satisfies a connected hereditary property and contains infinitely many different biconnected graphs, and for which a special kind of the graph isomorphism problem can be computed in polynomial time. In this paper, we consider learning and mining problems for \(\mathcal{G}\). Firstly, we define a new graph pattern, called a block preserving graph pattern (bp-graph pattern) for \(\mathcal{G}\). Secondly, we present a polynomial time algorithm for deciding whether or not a given bp-graph pattern matches a given graph in \(\mathcal{G}\). Thirdly, by giving refinement operators over bp-graph patterns, we present a polynomial time algorithm for finding a minimally generalized bp-graph pattern for \(\mathcal{G}\). Outerplanar graphs are planar graphs which can be embedded in the plane in such a way that all vertices lie on the outer boundary. Many pharmacological chemical compounds are known to be represented by outerplanar graphs. The class of connected outerplanar graphs satisfies the above conditions for \(\mathcal{G}\). Next, we propose two incremental polynomial time algorithms for enumerating all frequent bp-graph patterns with respect to a given finite set of graphs in \(\mathcal{G}\). Finally, by reporting experimental results obtained by applying the two graph mining algorithms to a subset of the NCI dataset, we evaluate the performance of the two graph mining algorithms.

19.
Quantified Boolean formulae (QBF) allow compact encoding of many decision problems. Their importance motivated the development of fast QBF solvers. Certifying the results of a QBF solver not only ensures correctness, but also enables certain synthesis and verification tasks. To date, the certificate of a true formula can be in the form of either a syntactic cube-resolution proof or a semantic Skolem-function model, whereas that of a false formula is only in the form of a syntactic clause-resolution proof. The semantic certificate for false QBFs is missing, and the syntactic and semantic certificates are somewhat unrelated. This paper identifies the missing Herbrand-function countermodel for false QBFs, and strengthens the connection between syntactic and semantic certificates by showing that, given a true QBF, its Skolem-function model is derivable from its cube-resolution proof of satisfiability as well as from its clause-resolution proof of unsatisfiability under formula negation. Consequently, Skolem-function derivation can be decoupled from special Skolemization-based solvers and computed from standard search-based ones. Experimental results show strong benefits of the new method.
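As a concrete illustration of a semantic certificate (an example of ours, not taken from the paper): for the true QBF ∀x∃y. (x ∨ y) ∧ (¬x ∨ ¬y), the Skolem function y = ¬x is a model. The sketch below validates it by substituting the function and checking every universal assignment.

```python
# Sketch: validate a Skolem-function model of a tiny true QBF by brute force.
# QBF: forall x exists y . (x or y) and (not x or not y); candidate model: y(x) = not x.
from itertools import product

def matrix(x, y):
    return (x or y) and ((not x) or (not y))

def skolem_y(x):
    return not x

def is_model(num_universals=1):
    # For every assignment to the universal variables, the matrix must hold once
    # the existential variable is replaced by its Skolem function.
    return all(matrix(x, skolem_y(x))
               for (x,) in product([False, True], repeat=num_universals))

print(is_model())    # True: y = not x certifies that the QBF is true
```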

20.
We consider the problem of model selection in the batch (offline, non-interactive) reinforcement learning setting, when the goal is to find an action-value function with the smallest Bellman error among a countable set of candidate functions. We propose a complexity regularization-based model selection algorithm, BErMin, and prove that it enjoys an oracle-like property: the estimator's error differs from that of an oracle, who selects the candidate with the minimum Bellman error, by only a constant factor and a small remainder term that vanishes at a parametric rate as the number of samples increases. As an application, we consider a problem where the true action-value function belongs to an unknown member of a nested sequence of function spaces. We show that, under some additional technical conditions, BErMin leads to a procedure whose rate of convergence, up to a constant factor, matches that of an oracle who knows which of the nested function spaces the true action-value function belongs to, i.e., the procedure achieves adaptivity.
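For orientation, a sketch of the naive empirical squared Bellman residual of a candidate action-value function on a batch of transitions; this simple estimator is biased under stochastic dynamics, which is part of why the paper's BErMin procedure is more involved than a direct minimization of it.

```python
# Sketch: naive empirical (squared) Bellman residual of a candidate Q-function on a
# batch of transitions (s, a, r, s'). Shown only to illustrate the quantity involved
# in model selection, not the BErMin algorithm itself.
def bellman_residual(Q, batch, actions, gamma=0.99):
    total = 0.0
    for s, a, r, s_next in batch:
        target = r + gamma * max(Q(s_next, b) for b in actions)
        total += (Q(s, a) - target) ** 2
    return total / len(batch)

# Toy example: two states {0, 1}, two actions, a hand-made candidate Q.
def Q(s, a):
    return 1.0 if (s, a) == (0, 1) else 0.5

batch = [(0, 1, 1.0, 1), (1, 0, 0.0, 0)]
print(bellman_residual(Q, batch, actions=[0, 1], gamma=0.9))
```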

