Similar Documents
20 similar documents found (search time: 31 ms).
1.
We propose a generic sound δ-rule, based on a quite general method for constructing Skolem terms, which can be used as a common framework for proving the soundness of known variants of the δ-rule, and we compare their relative effectiveness. Attempts to instantiate some of the δ-rules in the literature within our framework allowed us to pinpoint unsoundness problems in two of them. In both cases, we propose revised versions that are proved sound by reduction within our framework. This paper is based on results preliminarily presented in [5].
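For orientation, every variant under discussion instantiates one rule scheme and differs only in its Skolem-term policy (a standard textbook presentation, not the paper's exact formulation):

    \frac{\exists x\,\varphi(x)}{\varphi(f(y_1,\dots,y_n))}    (f a fresh Skolem function symbol)

The classical δ-rule takes y_1, …, y_n to be all free variables of the branch; the δ⁺-rule restricts them to the free variables of ∃x φ(x); later liberalizations (δ⁺⁺ and its relatives) additionally reuse the same Skolem symbol across occurrences of the same formula up to renaming. Soundness hinges on exactly how much restriction and reuse is permitted, which is where unsoundness can creep into the liberalized variants.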

2.
Some search problems are most directly specified by conjunctions of (sets of) disjunctions of pseudo-Boolean (PB) constraints. We study a logic PL^PB whose formulas have this form, and design local-search methods to compute models of PL^PB theories. In our approach we view a PL^PB theory T as a data structure, a concise representation of a certain propositional conjunctive normal form (CNF) theory cl(T) logically equivalent to T. The key idea is the observation that the parameters needed by local-search algorithms for CNF theories, such as walksat, can be estimated on the basis of T without computing cl(T) explicitly. We compare our methods to the local-search algorithm wsat(oip). The experiments demonstrate that our approach performs better. For wsat(oip) to handle arbitrary PL^PB theories, disjunctions of PB constraints must be represented by sets of PB constraints, which often increases the size of the theory dramatically. The better performance of our method underscores the importance of developing solvers that work directly on PL^PB theories. This paper combines and extends results included in conference papers [14, 15].
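As a rough illustration of local search operating directly on PB constraints (a minimal sketch with hypothetical names, not the authors' algorithm: disjunctions of constraints and the paper's walksat-parameter estimation are omitted):

    import random

    # A PB constraint sum(coefs[i] * x[varids[i]]) >= bound over x in {0,1},
    # kept as data -- no CNF expansion.
    def violation(constraint, assignment):
        """Distance from satisfaction (0 = satisfied)."""
        coefs, varids, bound = constraint
        return max(0, bound - sum(c * assignment[v] for c, v in zip(coefs, varids)))

    def local_search(constraints, n_vars, max_flips=100_000, noise=0.2, seed=0):
        rng = random.Random(seed)
        assignment = [rng.randint(0, 1) for _ in range(n_vars)]
        for _ in range(max_flips):
            unsat = [c for c in constraints if violation(c, assignment) > 0]
            if not unsat:
                return assignment                     # model found
            _, varids, _ = rng.choice(unsat)          # focus on one violated constraint
            if rng.random() < noise:                  # walksat-style random walk
                flip = rng.choice(varids)
            else:                                     # greedy: minimize total violation
                def total_after_flip(v):
                    assignment[v] ^= 1
                    s = sum(violation(c, assignment) for c in constraints)
                    assignment[v] ^= 1
                    return s
                flip = min(varids, key=total_after_flip)
            assignment[flip] ^= 1
        return None

    # x0 + x1 >= 1  and  2*x0 + x2 >= 2:
    print(local_search([([1, 1], [0, 1], 1), ([2, 1], [0, 2], 2)], 3))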

3.
We propose a refinement approach to language emptiness, based on the enumeration and successive refinement of SCCs on over-approximations of the exact system. Our algorithm is compositional: it performs as much computation as possible on the abstract systems and prunes uninteresting parts of the search space as early as possible. It decomposes the state space disjunctively so that each state subset can be checked in isolation to decide language emptiness for the given system. We prove that the strength of an SCC, or a set of SCCs, decreases monotonically under composition. This allows us to deploy the proper model checking algorithm according to the strength of the SCC at hand. We also propose to use the approximate distance of a fair cycle from the initial states to guide the search. Experimental studies on a set of LTL model checking problems demonstrate the effectiveness of our method. This work was done while the first two authors were at the University of Colorado at Boulder. Parts of this work appeared in preliminary form in [38] and [39].
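To make "strength" concrete (using the usual taxonomy for SCCs of a Büchi product graph; an explicit-state toy, whereas the paper works on symbolic over-approximations, and networkx is merely our illustration choice):

    import networkx as nx

    def classify_sccs(edges, accepting):
        """terminal: all states accepting, inescapable, every state has a successor
                     inside -> reachability alone certifies a fair cycle;
        weak:        all states accepting -> any cycle found inside is fair;
        strong:      mixed acceptance -> full fair-cycle detection is needed."""
        g = nx.DiGraph(edges)
        for scc in nx.strongly_connected_components(g):
            trapped = all(g.out_degree(u) > 0
                          and all(v in scc for v in g.successors(u)) for u in scc)
            if scc <= accepting:
                yield scc, ("terminal" if trapped else "weak")
            else:
                yield scc, "strong"

    edges = [(0, 1), (1, 0), (1, 2), (2, 2)]
    for scc, s in classify_sccs(edges, accepting={0, 1, 2}):
        print(sorted(scc), s)       # [0, 1] weak / [2] terminal

Cheaper checks suffice for lower strengths, which is exactly why the monotone decrease of strength under composition pays off.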

4.
5.
Winfree’s pioneering work laid the foundations of error reduction in algorithmic self-assembly (Winfree and Bekbolatov in DNA Based Computers 9, LNCS, vol. 2943, pp. 126–144, [2004]), but the construction increased the size of the assembly. Reif et al. (Nanotechnol. Sci. Comput. 79–103, [2006]) contributed compact error-resilient schemes that maintain the original size of the assemblies, but require certain restrictions on the Boolean functions used in the algorithmic self-assembly. It is a critical challenge to improve these compact error-resilient schemes to incorporate arbitrary Boolean functions, and to determine how far these prior results can be extended under different degrees of restriction on the Boolean functions. In this work we present a considerably more complete theory of compact error-resilient schemes for algorithmic self-assembly in two and three dimensions. In our error model, ε is defined as the probability that the neighboring sides of two juxtaposed tiles mismatch and the tiles nevertheless stay together at equilibrium. This probability is independent of any other match or mismatch, and hence we term this probabilistic model the independent error model. All our error analysis is performed under the assumption of kinetic equilibrium. First we consider two-dimensional algorithmic self-assembly. We present an error correction scheme that reduces errors from ε to ε² for arbitrary Boolean functions in two-dimensional algorithmic self-assembly. We then characterize the class of Boolean functions for which the error can be reduced from ε to ε³, and present an error correction scheme that achieves this reduction. We then prove ultimate limits on certain classes of compact error-resilient schemes: in particular, we show that they cannot reduce errors from ε to ε⁴ for any Boolean function. Further, we develop the first provable compact error-resilience schemes for three-dimensional tiling self-assemblies. We also extend Winfree’s work on self-healing in two-dimensional self-assembly (Winfree in Nanotechnol. Sci. Comput. 55–78, [2006]) to obtain a self-healing tile set for three-dimensional self-assembly.
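The quantitative content of such reductions under the independent error model is worth spelling out (our reading of the model, not a statement from the paper): if the tile set guarantees that a single pad mismatch can persist at equilibrium only in the company of k further mismatches, then, by independence of mismatches,

    Pr[an erroneous assembly step persists] ≤ ε^(k+1),

so forcing one extra mismatch yields the ε → ε² schemes and forcing two yields the ε → ε³ schemes.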

6.
This paper introduces a new framework for solving quantified constraint satisfaction problems (QCSPs) defined by universally quantified inequalities on continuous domains. This class of QCSPs has numerous applications in engineering and technology. We introduce a generic branch-and-prune algorithm to tackle these continuous CSPs with parametric constraints, where the pruning and solution-identification processes are dedicated to universally quantified inequalities. Special rules are proposed to handle the parameter domains of the constraints. The originality of our framework lies in the fact that it solves the QCSP as a non-quantified CSP where the quantifiers are handled locally, at the level of each constraint. Experiments show that our algorithm outperforms state-of-the-art methods based on constraint techniques. This paper is an extended version of a paper published at the SAC 2008 conference [15].
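A toy rendition of the core certificate (a hand-rolled sketch with hypothetical names, not the paper's branch-and-prune algorithm): to show that a box X satisfies ∀y ∈ Y : f(x, y) ≤ 0, it suffices that an interval evaluation of f over (X, Y) has a non-positive upper bound; a positive lower bound lets the box be pruned, and otherwise the box is split.

    class Interval:
        def __init__(self, lo, hi): self.lo, self.hi = lo, hi
        def __add__(self, o): return Interval(self.lo + o.lo, self.hi + o.hi)
        def __sub__(self, o): return Interval(self.lo - o.hi, self.hi - o.lo)
        def __mul__(self, o):
            p = [self.lo*o.lo, self.lo*o.hi, self.hi*o.lo, self.hi*o.hi]
            return Interval(min(p), max(p))
        def split(self):
            m = 0.5 * (self.lo + self.hi)
            return Interval(self.lo, m), Interval(m, self.hi)
        @property
        def width(self): return self.hi - self.lo

    def inner_boxes(f, x, y, eps=1e-3):
        """Sub-boxes of x certified to satisfy  forall y in Y: f(x, y) <= 0."""
        inner, todo = [], [x]
        while todo:
            b = todo.pop()
            r = f(b, y)               # interval enclosure of f over (b, Y)
            if r.hi <= 0:
                inner.append(b)       # certified for every y at once
            elif r.lo > 0 or b.width < eps:
                continue              # pruned, or too small to decide
            else:
                todo.extend(b.split())
        return inner

    # forall y in [0,1]:  x*y - 0.5 <= 0  holds exactly for x <= 1/2 on x in [0,2]:
    boxes = inner_boxes(lambda x, y: x*y - Interval(0.5, 0.5),
                        Interval(0, 2), Interval(0, 1))
    print(max(b.hi for b in boxes))   # 0.5

The universal quantifier is discharged locally, inside a single interval evaluation, which is the spirit of handling quantifiers "at the level of each constraint".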

7.
We consider the variable coefficient Poisson equation with Dirichlet boundary conditions on irregular domains. We present numerical evidence for the accuracy of the solution and its gradients under different treatments at the interface using the Ghost Fluid Method for Poisson problems of Gibou et al. (J. Comput. Phys. 176:205–227, 2002; 202:577–601, 2005). This paper is therefore intended as a guide for those interested in using the GFM for Poisson-type problems (and, by consequence, diffusion-like and Stefan-type problems), laying out the pros and cons of the different choices for defining the ghost values and locating the interface. We found that in order to obtain second-order-accurate gradients, both a quadratic (or higher-order) extrapolation for defining the ghost values and a quadratic (or higher-order) interpolation for finding the interface location are required. When the ghost values are defined by a linear extrapolation, the gradients of the solution converge slowly (at most first order on average) and the convergence rate oscillates, even when the interface location is defined by a quadratic interpolation. The same conclusions hold for the combination of a quadratic extrapolation for the ghost cells and a linear interpolation. The solution itself is second-order accurate in all cases. Defining the ghost values with quadratic extrapolations leads to a non-symmetric linear system with worse conditioning than in the linear extrapolation case, for which the linear system is symmetric and better conditioned. We conclude that for problems where only the solution matters, the method described by Gibou, F., Fedkiw, R., Cheng, L.-T. and Kang, M. in (J. Comput. Phys. 176:205–227, 2002) is advantageous, since the linear system that needs to be inverted is symmetric. In problems where the solution gradient is needed, as in Stefan-type problems, the higher-order extrapolation schemes described by Gibou, F. and Fedkiw, R. in (J. Comput. Phys. 202:577–601, 2005) are desirable.
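The extrapolation choices compared above are easy to state in one dimension. Let the interface sit between nodes x_i and x_{i+1}, at distance θΔx from x_i, with Dirichlet value u_Γ; a minimal sketch of the two ghost-value definitions (standard one-sided formulas, not code from the cited papers):

    import numpy as np

    def ghost_linear(u_i, u_gamma, theta):
        """Line through (x_i, u_i) and the interface point, evaluated at x_{i+1}
        (grid spacing scaled to 1)."""
        return u_i + (u_gamma - u_i) / theta

    def ghost_quadratic(u_im1, u_i, u_gamma, theta):
        """Parabola through (x_{i-1}, u_{i-1}), (x_i, u_i) and the interface,
        evaluated at x_{i+1}; exact for quadratics, hence second-order gradients."""
        xs = np.array([-1.0, 0.0, theta])          # x_{i-1}, x_i, interface
        ys = np.array([u_im1, u_i, u_gamma])
        return np.polyval(np.polyfit(xs, ys, 2), 1.0)

    # Sanity check on u(x) = x^2, interface at theta = 0.4 (so u_gamma = 0.16):
    print(ghost_linear(0.0, 0.16, 0.4))            # 0.4 -- O(dx) off the true 1.0
    print(ghost_quadratic(1.0, 0.0, 0.16, 0.4))    # 1.0 -- exact

The quadratic ghost couples u_{i-1} into the stencil asymmetrically, which is precisely why the resulting linear system loses symmetry, as noted above.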

8.
Constraints have played a central role in CP because they capture key substructures of a problem and exploit them efficiently to boost inference. This paper aims to do the same for search, proposing constraint-centered heuristics that guide the exploration of the search space toward areas likely to contain a high number of solutions. We first propose new search heuristics based on solution-counting information at the level of individual constraints. We then describe efficient algorithms to evaluate the number of solutions of two important families of constraints: occurrence-counting constraints, such as alldifferent, and sequencing constraints, such as regular. In both cases we take advantage of existing filtering algorithms to speed up the evaluation. Experimental results on benchmark problems show the effectiveness of our approach. An earlier version of this paper appeared as [24].
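For the occurrence-counting family, the quantity at stake has a classical closed form: the number of solutions of alldifferent(x_1, …, x_n) equals the permanent of the 0/1 variable-value matrix. A brute-force sketch of that raw ingredient (Ryser's formula; the paper necessarily works with efficient bounds instead, the permanent being #P-hard, and this sketch assumes as many values as variables):

    from itertools import combinations

    def permanent(matrix):
        """Ryser's formula, O(2^n * n^2); fine for a small sanity check."""
        n = len(matrix)
        total = 0
        for r in range(1, n + 1):
            for cols in combinations(range(n), r):
                prod = 1
                for row in matrix:
                    prod *= sum(row[c] for c in cols)
                total += (-1) ** (n - r) * prod
        return total

    def alldifferent_count(domains, values):
        """Number of solutions of alldifferent with x_i ranging over domains[i]."""
        return permanent([[1 if v in dom else 0 for v in values] for dom in domains])

    # x1 in {1,2}, x2 in {1,2}, x3 in {1,2,3}: x3 is forced to 3, so 2 solutions.
    print(alldifferent_count([{1, 2}, {1, 2}, {1, 2, 3}], values=[1, 2, 3]))   # 2

Branching toward the variable-value pair with the highest solution density is then the style of heuristic the paper develops on top of such counts.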

9.
Finite domain propagation solvers effectively represent the possible values of variables by a set of choices, which can be naturally modelled as Boolean variables. In this paper we describe how to mimic a finite domain propagation engine by mapping propagators into clauses in a SAT solver. This immediately yields strong nogoods for finite domain propagation. But a naive static translation is impractical except in limited cases. We show how to convert propagators into lazy clause generators for a SAT solver. The resulting system introduces flexibility in modelling, since variables are modelled dually in the propagation engine and the SAT solver, and we explore various approaches to the dual modelling. We show that the resulting system solves many finite domain problems significantly faster than other techniques. This paper is an extension of results first published in [29, 30].
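The dual modelling in miniature: each integer variable x is mirrored by Boolean atoms such as [[x <= v]], and a propagator, instead of silently tightening a bound, emits the clause explaining the tightening, which the SAT solver keeps as a reusable nogood. A schematic sketch (hypothetical encoding; real systems also post consistency clauses such as [[x <= v]] -> [[x <= v+1]]):

    def atom(x, v):
        """The Boolean atom [[x <= v]] for integer variable x."""
        return ('le', x, v)

    def neg(a):
        return ('not', a)

    def explain_sum_upper_bound(x, y, z, ub_x, ub_y):
        """Lazy-clause step for the constraint z = x + y: from x <= ub_x and
        y <= ub_y a propagator infers z <= ub_x + ub_y, and emits it as
            not[[x <= ub_x]]  or  not[[y <= ub_y]]  or  [[z <= ub_x + ub_y]]."""
        return [neg(atom(x, ub_x)), neg(atom(y, ub_y)), atom(z, ub_x + ub_y)]

    print(explain_sum_upper_bound('x', 'y', 'z', ub_x=3, ub_y=5))
    # [('not', ('le', 'x', 3)), ('not', ('le', 'y', 5)), ('le', 'z', 8)]

Because each clause is generated only when the propagation actually fires, the full static clause translation never has to be built up front.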

10.
We present the latest instantiation of GridSAT [1], a distributed and complete satisfiability solver explicitly designed to aggregate Grid resources for application performance. GridSAT was previously shown to outperform state-of-the-art sequential solvers. In this work, we explore the unprecedented solving power GridSAT enables through algorithmic and implementation innovations. We describe the implementation techniques that allow GridSAT to leverage a variety of high-end batch-scheduled resources, clusters, interactive workstations, and personal computing resources through autonomous scheduling, checkpoint scheduling, and work migration. These innovations have allowed GridSAT to solve a set of 'hard' and previously unsolved industrial and community satisfiability problems. In addition to this new solving power, GridSAT also outperforms the otherwise highest-performing general solvers on the annual SAT competition [21] performance benchmarks. This work was supported by grants from the National Science Foundation, numbered CAREER-0093166, EIA-9975020, ACI-0103759, and CCR-0331654, and by the San Diego Supercomputer Center and the TeraGrid project.

11.
This paper considers a single-product, single-stocking-location production/inventory control problem under non-stationary stochastic demand. Under a widely used control policy for this type of inventory system, the objective is to find the optimal number of replenishments, their timings, and their respective order-up-to levels that meet customer demand at a required service level. We extend a known CP approach to this problem using three cost-based filtering methods. Our approach can solve instances of realistic size to optimality much more efficiently than previous approaches, often with no search effort at all. This work was supported by Science Foundation Ireland under Grant No. 03/CE3/I405 as part of the Centre for Telecommunications Value-Chain-Driven Research (CTVR) and Grant No. 00/PI.1/C075. This paper is an extended version of [19].
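To fix ideas on the policy being optimized, the order-up-to level for one replenishment cycle is a newsvendor-style quantile (a generic textbook calculation with hypothetical names, not the paper's CP model or its cost-based filters):

    from math import sqrt
    from statistics import NormalDist

    def order_up_to(mus, sigmas, alpha):
        """Level S such that Pr[demand over the covered periods <= S] >= alpha,
        for independent normally distributed period demands."""
        mu = sum(mus)
        sigma = sqrt(sum(s * s for s in sigmas))
        return mu + NormalDist().inv_cdf(alpha) * sigma

    # One replenishment covering three periods of non-stationary demand
    # at a 95% service level:
    print(round(order_up_to([100, 150, 80], [20, 30, 15], 0.95), 1))

The optimization problem above is to choose how many such cycles to use and where they start, trading ordering and holding costs against these service constraints.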

12.
We present the first sublinear-time algorithms for computing order statistics in the Farey sequence and for the related problem of ranking. Our algorithms achieve running times of nearly O(n^{2/3}), a significant improvement over the previous algorithms, which take time O(n). We also initiate the study of a more general problem: counting primitive lattice points inside planar shapes. For rational polygons containing the origin, we obtain a running time proportional to D^{6/7}, where D is the diameter of the polygon. This work merges [19] and [21], with additional extensions.
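The ranking half of the problem is concrete enough to state in code: the number of Farey fractions p/q ≤ x with q ≤ n follows from Möbius inversion over the unrestricted count Σ_{q≤n} ⌊qx⌋. The direct evaluation below runs in about O(n log n) time and is the kind of baseline the paper's near-O(n^{2/3}) algorithm improves on (hypothetical function names):

    from fractions import Fraction

    def mobius_sieve(n):
        mu = [1] * (n + 1)
        is_prime = [True] * (n + 1)
        for p in range(2, n + 1):
            if is_prime[p]:
                for m in range(p, n + 1, p):
                    if m > p:
                        is_prime[m] = False
                    mu[m] = -mu[m]
                for m in range(p * p, n + 1, p * p):
                    mu[m] = 0
        return mu

    def farey_rank(x, n):
        """Number of fractions p/q with 0 < p/q <= x, q <= n, gcd(p, q) = 1."""
        mu = mobius_sieve(n)
        def pairs(m):              # pairs (p, q) with q <= m, 1 <= p <= q*x
            return sum(q * x.numerator // x.denominator for q in range(1, m + 1))
        return sum(mu[d] * pairs(n // d) for d in range(1, n + 1) if mu[d])

    # F_5 contains 1/5, 1/4, 1/3, 2/5, 1/2, ... -- five of them lie in (0, 1/2]:
    print(farey_rank(Fraction(1, 2), 5))   # 5

Order statistics (the j-th Farey fraction) then reduce to binary search on farey_rank.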

13.
This paper presents a performance study of a one-dimensional search algorithm for solving general high-dimensional optimization problems. The proposed approach is a hybrid between a line search algorithm of Glover (The 3-2-3, stratified split and nested interval line search algorithms. Research report, OptTek Systems, Boulder, CO, 2010) and an improved variant of the global method of Gardeux et al. (Unidimensional search for solving continuous high-dimensional optimization problems. In: ISDA '09: Proceedings of the 2009 Ninth International Conference on Intelligent Systems Design and Applications, IEEE Computer Society, Washington, DC, USA, pp. 1096–1101, 2009), which uses line search algorithms as subroutines. The resulting algorithm, called EM323, was tested on 19 scalable benchmark functions, with a view to observing how optimization techniques for continuous optimization problems respond as the dimension increases. To this end, we report the algorithm's performance on the 50-, 100-, 200-, 500- and 1,000-dimension versions of each function. Computational results are given comparing our method with three leading evolutionary algorithms. Statistical analysis shows that our method outperforms the other methods by a significant margin.
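The underlying template, stripped of the specific 3-2-3 and stratified-split machinery: sweep over coordinates, applying a one-dimensional search to each while the others stay fixed. A minimal golden-section stand-in (a generic sketch, not EM323 itself):

    import math

    def golden_section(f, a, b, tol=1e-6):
        """Standard golden-section search for a 1-D minimum on [a, b]."""
        inv_phi = (math.sqrt(5) - 1) / 2
        c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
        fc, fd = f(c), f(d)
        while b - a > tol:
            if fc < fd:
                b, d, fd = d, c, fc
                c = b - inv_phi * (b - a); fc = f(c)
            else:
                a, c, fc = c, d, fd
                d = a + inv_phi * (b - a); fd = f(d)
        return 0.5 * (a + b)

    def coordinate_search(f, x, lo, hi, sweeps=20):
        """Minimize f over a box by repeated 1-D searches, one coordinate at a time."""
        x = list(x)
        for _ in range(sweeps):
            for i in range(len(x)):
                def f_slice(t, i=i):
                    y = x[:]; y[i] = t
                    return f(y)
                x[i] = golden_section(f_slice, lo, hi)
        return x

    # Sphere function in 50 dimensions, optimum at (1, ..., 1):
    f = lambda v: sum((vi - 1.0) ** 2 for vi in v)
    x = coordinate_search(f, [0.0] * 50, lo=-5.0, hi=5.0)
    print(round(f(x), 8))   # ~0.0

The appeal in high dimension is that the cost per sweep grows only linearly with the number of variables.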

14.
Symmetry detection and analysis in 3D images is a fundamental task in a gamut of scientific fields, such as computer vision, medical imaging, and pattern recognition, to name a few. In this work, we present a computational approach to 3D symmetry detection and analysis. Our analysis is conducted in the Fourier domain using the pseudo-polar Fourier transform. The pseudo-polar representation enables efficient and accurate analysis of angular volumetric properties such as rotational symmetries. Our algorithm is based on analyzing the angular correspondence rate between the given volume and its rotated and rotated-inverted replicas in their pseudo-polar representations. We also derive a novel rigorous analysis of the inherent constraints of 3D symmetries via group-theoretic analysis. Thus, our algorithm starts by detecting the rotational symmetry group of a given volume, and the rigorous analysis paves the way to detecting the remaining symmetries. The complexity of the algorithm is O(N³ log N), where the volume is of size N×N×N. This complexity is independent of the number of detected symmetries. We verified our approach experimentally by applying it to synthetic as well as real 3D objects.
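A 2D toy analogue of the detection principle (the paper's analysis is volumetric and runs in the pseudo-polar Fourier domain; here we simply correlate an image with its rotated replicas in the spatial domain): a C_k-symmetric pattern correlates highly with its own rotation by 360/k degrees.

    import numpy as np
    from scipy.ndimage import rotate

    def rotation_correlation(img, angle):
        """Normalized correlation of img with img rotated by `angle` degrees."""
        r = rotate(img, angle, reshape=False, order=1)
        a, b = img - img.mean(), r - r.mean()
        return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

    def rotational_order(img, max_order=8, thresh=0.95):
        """Largest k <= max_order for which a 360/k rotation leaves img invariant."""
        best = 1
        for k in range(2, max_order + 1):
            if rotation_correlation(img, 360.0 / k) > thresh:
                best = k
        return best

    # Synthetic C_4-symmetric image: four Gaussian blobs at 90-degree spacing.
    n = 129
    yy, xx = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    img = sum(np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / 0.02)
              for cx, cy in [(0.5, 0), (-0.5, 0), (0, 0.5), (0, -0.5)])
    print(rotational_order(img))   # 4

The pseudo-polar transform plays the analogous role for volumes while avoiding the resampling interpolation that scipy's rotate performs here.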

15.
We present a deterministic Logspace procedure which, given a bipartite planar graph on n vertices, assigns O(log n)-bit weights to its edges so that the minimum-weight perfect matching in the graph becomes unique. The Isolation Lemma of Mulmuley et al. (Combinatorica 7(1):105–131, 1987) achieves the same for general graphs using randomness, whereas we do it deterministically when restricted to bipartite planar graphs. As a consequence, we reduce both the decision and construction versions of the perfect matching problem in bipartite planar graphs to testing whether a matrix is singular, under the promise that its determinant is 0 or 1, thus obtaining a highly parallel SPL algorithm for both the decision and construction versions of the bipartite perfect matching problem. This improves the earlier known bounds of non-uniform SPL by Allender et al. (J. Comput. Syst. Sci. 59(2):164–181, 1999) and NC² by Miller and Naor (SIAM J. Comput. 24:1002–1017, 1995) and by Mahajan and Varadarajan (Proceedings of the Thirty-Second Annual ACM Symposium on Theory of Computing (STOC), pp. 351–357, 2000). It also rekindles the hope of obtaining a deterministic parallel algorithm for constructing a perfect matching in non-bipartite planar graphs, which has been open for a long time. Further, we investigate the number of bits needed for deterministically isolating a perfect matching: we show that our particular method of isolation requires Ω(log n) bits. Our techniques are elementary.
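For reference, the randomized statement being derandomized here (Mulmuley et al., Combinatorica 7(1):105–131, 1987) reads: for any family F of subsets of a ground set {1, …, m} — here, the perfect matchings of a graph with m edges, viewed as edge sets — if each weight w_i is drawn independently and uniformly from {1, …, 2m}, then

    Pr[ the minimum-weight member of F is unique ] ≥ 1/2.

The result above replaces these random O(log m)-bit weights with deterministically computable O(log n)-bit weights in the bipartite planar case.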

16.
We extend CTL to a logic called COUNT CTL (CCTL) for specifying properties of concurrent programs with a large number of processes. We present a model checking algorithm for symmetric or partially symmetric systems whose correctness specification is given in CCTL. The model-checking algorithm employs Guarded Quotient Structures (GQS), introduced by Sistla and Godefroid (Lecture Notes in Comput. Sci., vol. 2102, 2001). GQS can serve as succinct representations of the reachability graphs of partially symmetric or even asymmetric systems. Our algorithm exploits state symmetries for fast evaluation. The algorithm is top-down in nature, and automatically incorporates formula decomposition and sub-formula tracking. This work was supported in part by NSF grants CCR-9988884 and CCR-0205365.

17.
The error estimates of automatic integration in pure floating-point arithmetic carry intrinsic uncertainty, which in critical cases can make the computation unreliable. To avoid this problem, we use product rules to implement a self-validating subroutine for bivariate cubature over rectangular regions. Unlike previous self-validating integrators for multiple variables (Storck in Scientific Computing with Automatic Result Verification, pp. 187–224, Academic Press, San Diego, [1993]; Wolfe in Appl. Math. Comput. 96:145–159, [1998]), which use derivatives of specific higher orders for the error estimates, we extend the ideas for univariate quadrature investigated in (Chen in Computing 78(1):81–99, [2006]) to our bivariate cubature, enabling locally adaptive error estimates through full use of the Peano kernel theorem. A mechanism for actively recognizing unreachable error bounds is also set up. We demonstrate the effectiveness of our approach by comparing it with a conventional integrator.
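The flavour of self-validation, in its very crudest form (a zeroth-order enclosure, far weaker than the paper's Peano-kernel-based product rules, but showing how interval arithmetic turns an error estimate into a guarantee): on each subrectangle an interval evaluation of f encloses its range, so range times area encloses the subintegral, and the sum encloses the integral. Sketch using mpmath's interval context:

    from mpmath import iv

    def enclose_integral(f, ax, bx, ay, by, nx=64, ny=64):
        """Interval guaranteed to contain the integral of f over [ax,bx]x[ay,by]."""
        hx, hy = (bx - ax) / nx, (by - ay) / ny
        total = iv.mpf(0)
        for i in range(nx):
            for j in range(ny):
                cx = iv.mpf([ax + i * hx, ax + (i + 1) * hx])
                cy = iv.mpf([ay + j * hy, ay + (j + 1) * hy])
                total += f(cx, cy) * iv.mpf(hx) * iv.mpf(hy)
        return total

    # The integral of x*y over the unit square is exactly 1/4; the printed
    # interval is a rigorous bound, not a floating-point estimate.
    print(enclose_integral(lambda x, y: x * y, 0, 1, 0, 1))

Higher-order kernels tighten the enclosure dramatically, which is the point of the product rules above.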

18.
Borodin et al. (Algorithmica 37(4):295–326, 2003) gave a model of greedy-like algorithms for scheduling problems, and Angelopoulos and Borodin (Algorithmica 40(4):271–291, 2004) extended their work to facility location and set cover problems. We generalize their model to include other optimization problems, and apply the generalized framework to graph problems. Our goal is to define an abstract model that captures the intrinsic power and limitations of greedy algorithms for various graph optimization problems, as Borodin et al. (Algorithmica 37(4):295–326, 2003) did for scheduling. We prove bounds on the approximation ratio achievable by such algorithms for basic graph problems such as shortest path, weighted vertex cover, Steiner tree, and independent set. For example, we show that, for the shortest path problem, no algorithm in the FIXED priority model can achieve any approximation ratio (even one dependent on the graph size), whereas the well-known Dijkstra's algorithm is an optimal ADAPTIVE priority algorithm. We also prove that the approximation ratio for weighted vertex cover achievable by ADAPTIVE priority algorithms is exactly 2; here, a new lower bound matches the known upper bounds (Johnson in J. Comput. Syst. Sci. 9(3):256–278, 1974). We give a number of other lower bounds for priority algorithms, as well as a new approximation algorithm for the minimum Steiner tree problem with weights in the interval [1,2]. S. Davis' research was supported by NSF grants CCR-0098197, CCR-0313241, and CCR-0515332; views expressed are not endorsed by the NSF. R. Impagliazzo's research was supported by NSF grants CCR-0098197, CCR-0313241, and CCR-0515332; views expressed are not endorsed by the NSF. Some work was done while at the Institute for Advanced Study, supported by the State of New Jersey.
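Since the abstract singles out Dijkstra's algorithm as an optimal ADAPTIVE priority algorithm for shortest path, here it is for concreteness; the adaptive-priority reading is that each pop commits an irrevocable decision about the currently best data item, with the ordering revised as distances settle:

    import heapq

    def dijkstra(graph, source):
        """graph: dict vertex -> list of (neighbor, nonnegative weight)."""
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float('inf')):
                continue                      # stale heap entry
            # Irrevocable decision: u's distance is final from here on.
            for v, w in graph.get(u, []):
                if d + w < dist.get(v, float('inf')):
                    dist[v] = d + w
                    heapq.heappush(heap, (d + w, v))
        return dist

    g = {'s': [('a', 2), ('b', 5)], 'a': [('b', 1)], 'b': []}
    print(dijkstra(g, 's'))   # {'s': 0, 'a': 2, 'b': 3}

A FIXED priority algorithm must instead commit to its processing order before seeing any distances, which is what the inapproximability result above exploits.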

19.
This paper considers a family of spatially discrete approximations, including boundary treatment, to initial boundary value problems on evolving bounded domains. The presented method is based on the Cartesian grid embedded Finite-Difference method initially introduced by Abarbanel and Ditkowski (ICASE Report No. 96-8, 1996; and J. Comput. Phys. 133(2), 1997) and Ditkowski (Ph.D. thesis, Tel Aviv University, 1997) for initial boundary value problems on constant irregular domains. We perform a comprehensive theoretical analysis of the numerical issues that arise when dealing with domains whose boundaries evolve smoothly in space as a function of time. In this class of problems the moving boundaries are impenetrable, with either Dirichlet or Neumann boundary conditions, and should not be confused with moving interface problems such as multiphase flow, solidification, and the Stefan problem. Unlike other similar works on this class of problems, the resulting method is not restricted to domains of up to three dimensions, can achieve higher than 2nd-order accuracy in both time and space, and is strictly stable in the semi-discrete setting. The strict stability of the method also implies that the numerical solution remains consistent and valid over long integration times. A complete convergence analysis is carried out in the semi-discrete setting, including a detailed analysis of the implementation for the diffusion equation. Numerical solutions of the diffusion equation using the method at 2nd- and 4th-order accuracy are carried out in one and two dimensions, respectively, demonstrating the efficacy of the method. This research was supported by the Israel Science Foundation (grant No. 1362/04).
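A deliberately crude 1D illustration of the embedded-Cartesian-grid idea on an evolving domain (our own sketch, first-order near the boundary and explicit in time, so far below the accuracy and strict-stability guarantees of the method above): the grid never moves; instead, the cut-cell stencil of the first active node is rebuilt each step from the current boundary position.

    import numpy as np

    def heat_on_moving_domain(n=101, T=0.5, s=lambda t: 0.25 * t, g=1.0):
        """u_t = u_xx on [s(t), 1], u(s(t), t) = g, u(1, t) = 0, fixed grid."""
        h = 1.0 / (n - 1)
        x = np.linspace(0.0, 1.0, n)
        dt = 0.2 * h * h                                 # explicit-Euler stable
        u = np.zeros(n)
        t = 0.0
        while t < T:
            sb = s(t)
            i0 = int(np.searchsorted(x, sb + 0.5 * h))   # first active node
            th = (x[i0] - sb) / h                        # 0.5 <= th < 1.5
            un = u.copy()
            # Shortley-Weller stencil at the cut cell (unequal arms th*h and h):
            un[i0] = u[i0] + dt * 2.0 * (g / (th * (th + 1.0))
                                         - u[i0] / th
                                         + u[i0 + 1] / (th + 1.0)) / (h * h)
            # Regular second-difference update in the interior:
            un[i0+1:n-1] = u[i0+1:n-1] + dt * (u[i0+2:] - 2*u[i0+1:n-1] + u[i0:n-2]) / (h*h)
            un[:i0] = 0.0                                # nodes outside the domain
            un[n - 1] = 0.0
            u, t = un, t + dt
        return x, u

    x, u = heat_on_moving_domain()
    print(round(float(u.max()), 2))   # just under g, attained near the moved boundary

Nodes are retired (or activated) as the boundary sweeps past them; the construction above does the analogous bookkeeping with high-order one-sided stencils and proves strict stability for it.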

20.
We consider initial value problems for semilinear parabolic equations that possess a dispersive term, nonlocal in general. This dispersive term is not necessarily dominated by the dissipative term. In our numerical schemes, the time discretization is done by linearly implicit schemes. More specifically, we discretize the initial value problem by the implicit–explicit Euler scheme and by the two-step implicit–explicit BDF scheme. In this work, we extend the results of Akrivis et al. (Math. Comput. 67:457–477, 1998; Numer. Math. 82:521–541, 1999), where the dispersive term (if present) was dominated by the dissipative one and was integrated explicitly. We also derive optimal-order error estimates. We provide various physically relevant applications of dispersive–dissipative equations and systems that fit our abstract framework.
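In its simplest concrete setting the implicit–explicit Euler scheme reads (I − ΔtA)u^{n+1} = u^n + ΔtB(u^n), with the linear dissipative-plus-dispersive part A treated implicitly and the nonlinearity B explicitly. A toy spectral sketch for a dissipative–dispersive Burgers/KdV-type equation (an illustration in the spirit of the schemes above, not the paper's abstract framework):

    import numpy as np

    def imex_euler(n=256, L=2 * np.pi, nu=0.1, mu=0.02, dt=1e-3, steps=2000):
        """u_t + u u_x = nu u_xx + mu u_xxx on a periodic domain.
        Linear part solved implicitly (diagonal in Fourier space),
        nonlinear convection taken explicitly."""
        x = np.linspace(0, L, n, endpoint=False)
        k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)      # wavenumbers
        lin = nu * (1j * k) ** 2 + mu * (1j * k) ** 3   # symbol of A
        u = np.sin(x) + 0.5 * np.cos(2 * x)
        uh = np.fft.fft(u)
        for _ in range(steps):
            ux = np.real(np.fft.ifft(1j * k * uh))
            nonlin = np.fft.fft(-u * ux)                # B(u^n) = -u u_x
            uh = (uh + dt * nonlin) / (1 - dt * lin)    # (I - dt A) u^{n+1} = ...
            u = np.real(np.fft.ifft(uh))
        return x, u

    x, u = imex_euler()
    print(float(np.abs(u).max()))   # amplitude decayed by the dissipative term

Because |1 − Δt·lin(k)| ≥ 1 for every wavenumber k, the implicit treatment of the stiff dissipative and dispersive terms imposes no step-size restriction of its own, which is the structural point of the linearly implicit schemes.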
