Similar Articles (20 results)
1.
Due to significant advances in SAT technology in recent years, its use for solving constraint satisfaction problems (CSPs) has been gaining wide acceptance. Solvers for satisfiability modulo theories (SMT) generalize SAT solving by adding the ability to handle arithmetic and other theories. Although there are results pointing out the adequacy of SMT solvers for solving CSPs, no tools are available to explore that adequacy extensively. For this reason, in this paper we introduce a tool for translating FLATZINC (the MINIZINC intermediate code) instances of CSPs into the standard SMT-LIB language. We provide extensive performance comparisons between state-of-the-art SMT solvers and most of the available FLATZINC solvers on standard FLATZINC problems. The results suggest that state-of-the-art SMT solvers can be used effectively to solve CSPs.
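
As a flavour of what such a translation buys, here is a hand-written sketch of a tiny CSP posed to an SMT solver through the z3 Python bindings; it is not the paper's FLATZINC-to-SMT-LIB tool, and the toy constraints are invented for illustration.

```python
# A minimal sketch (not the paper's translator): posing a small CSP
# to an SMT solver via the z3 Python bindings.
from z3 import Ints, Solver, Distinct, sat

x, y, z = Ints("x y z")
s = Solver()
s.add(x + y + z == 10)          # linear arithmetic constraint
s.add(Distinct(x, y, z))        # alldifferent, handled natively by SMT solvers
for v in (x, y, z):             # finite domains become bound constraints
    s.add(1 <= v, v <= 6)

if s.check() == sat:
    print(s.model())            # e.g. [x = 1, y = 3, z = 6]
```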

2.
Table constraints are important in constraint programming, as they are present in many real problems from areas such as configuration and databases. As a result, numerous specialized algorithms that achieve generalized arc consistency (GAC) on table constraints have been proposed. Since these algorithms achieve GAC, they operate on one constraint at a time. In this paper we propose new filtering algorithms for positive table constraints that achieve stronger local consistency properties than GAC by exploiting intersections between constraints. The first algorithm, called maxRPWC+, is a domain-filtering algorithm that is based on the local consistency maxRPWC and extends the GAC algorithm of Lecoutre and Szymanek (2006). The second algorithm extends the state-of-the-art STR-based algorithms to stronger relation-filtering consistencies, i.e., consistencies that can remove tuples from constraints' relations. Experimental results from benchmark problems demonstrate that the proposed algorithms are quite competitive with standard GAC algorithms like STR2 on some classes of problems with intersecting table constraints, being orders of magnitude faster in some cases.
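
For readers unfamiliar with STR, the following minimal sketch shows the core of simple tabular reduction on a single positive table constraint: invalid tuples are discarded and domains shrink to the values that still have a support. It is a plain GAC pass, not the stronger maxRPWC-based filtering proposed in the paper.

```python
# A minimal sketch of simple tabular reduction (STR) for GAC on one
# positive table constraint: drop disallowed tuples, then shrink domains
# to the values still supported by some remaining tuple.
def str_filter(domains, table):
    """domains: list of sets; table: list of tuples over the same scope."""
    alive = [t for t in table
             if all(t[i] in domains[i] for i in range(len(domains)))]
    supported = [set() for _ in domains]
    for t in alive:
        for i, v in enumerate(t):
            supported[i].add(v)
    new_domains = [d & s for d, s in zip(domains, supported)]
    return new_domains, alive

doms, tab = str_filter([{1, 2, 3}, {1, 2}], [(1, 1), (2, 2), (3, 3)])
print(doms)  # [{1, 2}, {1, 2}]: value 3 lost its last support
```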

3.
We revisit the analysis of the classical QuickSelect algorithm. Usually, the analysis deals with the mean number of key comparisons, but here we view keys as words produced by a source, and words are compared via their symbols in lexicographic order. Our probabilistic models belong to a broad category of information sources that encompasses memoryless (i.e., independent-symbols) and Markov sources, as well as many unbounded-correlation sources. The “realistic” cost of the algorithm is here the total number of symbol comparisons performed by the algorithm, and, in this context, the average-case analysis aims to provide estimates for the mean number of symbol comparisons. For the QuickSort algorithm, known average-case complexity results are \(\Theta(n \log n)\) for key comparisons and \(\Theta(n \log^2 n)\) for symbol comparisons. For QuickSelect algorithms, with respect to key comparisons, the average-case complexity is \(\Theta(n)\). In this article, we prove that, with respect to symbol comparisons, QuickSelect's average-case complexity remains \(\Theta(n)\). In each case, we provide explicit expressions for the dominant constants, closely related to the probabilistic behaviour of the source.
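
The distinction between key and symbol comparisons is easy to make concrete. The toy instrumentation below (not the paper's analysis) runs QuickSelect on random binary words and counts every individual symbol comparison performed during lexicographic compares.

```python
# Toy instrumentation: QuickSelect over words, counting symbol comparisons
# made inside lexicographic key comparisons.
import random

symbol_cmps = 0

def lex_less(a, b):
    global symbol_cmps
    for ca, cb in zip(a, b):
        symbol_cmps += 1          # one symbol comparison per character pair
        if ca != cb:
            return ca < cb
    return len(a) < len(b)

def quickselect(words, k):
    """Return the k-th smallest word (0-based)."""
    pivot = random.choice(words)
    less = [w for w in words if lex_less(w, pivot)]
    greater = [w for w in words if lex_less(pivot, w)]
    if k < len(less):
        return quickselect(less, k)
    if k >= len(words) - len(greater):
        return quickselect(greater, k - (len(words) - len(greater)))
    return pivot                  # k falls in the block equal to the pivot

words = ["".join(random.choice("ab") for _ in range(20)) for _ in range(1000)]
print(quickselect(words, 500), symbol_cmps)
```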

4.
Satisfiability Modulo Theories (SMT) has been widely investigated over the last decade. Recently, researchers have extended SMT to optimization problems over linear arithmetic constraints. To the best of our knowledge, Symba and OPT-MathSAT are the two most efficient solvers available for this problem. The key algorithms used by Symba and OPT-MathSAT consist of a loop of two procedures: 1) critical finding, which detects a critical point that is very likely to be globally optimal, and 2) global checking, which confirms that the critical point really is globally optimal. In this paper, we propose a new approach based on the Simplex method widely used in operations research. Our fundamental idea is to find several critical points by constructing and solving a series of linear problems with the Simplex method. Our approach replaces the critical-finding algorithms of Symba and OPT-MathSAT, reducing the runtime of critical finding and decreasing the number of executions of global checking. We prove the correctness of our approach. Our experiments evaluate our implementation against Symba and OPT-MathSAT on a critical class of problems in real-time systems. Our approach outperforms Symba on 99.6% of the benchmarks and is superior to OPT-MathSAT in large-scale cases where the number of tasks is more than 24. The experimental results demonstrate that our approach has great potential and competitiveness for this optimization problem.
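
To illustrate the kind of problem being optimized, here is a hedged sketch using Z3's Optimize engine as a stand-in solver; the paper's contribution is a custom Simplex-based critical-finding loop, which this snippet does not reproduce.

```python
# Optimization modulo linear real arithmetic, via z3's Optimize engine.
from z3 import Reals, Optimize, sat

x, y = Reals("x y")
opt = Optimize()
opt.add(x + y <= 10, x - y >= 2, x >= 0, y >= 0)
h = opt.maximize(3 * x + 2 * y)     # linear objective over the feasible polytope
if opt.check() == sat:
    print(opt.model(), h.value())   # the optimum sits at a vertex, here (10, 0)
```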

5.
FEMPAR is an open-source object-oriented Fortran200X scientific software library for the high-performance, scalable simulation of complex multiphysics problems governed by partial differential equations at large scales, exploiting state-of-the-art supercomputing resources. It is a highly modularized, flexible, and extensible library that provides a set of modules that can be combined to carry out the different steps of the simulation pipeline. FEMPAR includes a rich set of algorithms for the discretization step, namely (arbitrary-order) grad-, div-, and curl-conforming finite element methods, discontinuous Galerkin methods, B-splines, and unfitted finite element techniques on cut cells, combined with h-adaptivity. The linear solver module relies on state-of-the-art bulk-asynchronous implementations of multilevel domain decomposition solvers for the different discretization alternatives, and on block-preconditioning techniques for multiphysics problems. FEMPAR is a framework that provides users with out-of-the-box state-of-the-art discretization techniques and highly scalable solvers for the simulation of complex applications, hiding the dramatic complexity of the underlying algorithms. But it is also a highly extensible framework for researchers who want to experiment with new algorithms and solvers. In this work, the first in a series of articles about FEMPAR, we provide a detailed introduction to the software abstractions used in the discretization module and the related geometrical module. We also cover some ingredients of the assembly of linear systems arising from finite element discretizations, but the software design of complex scalable multilevel solvers is postponed to a subsequent work.
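
As background for the assembly discussion, the sketch below shows the classic element-loop assembly pattern on a 1D Poisson problem with piecewise-linear elements. It is independent of FEMPAR and uses NumPy; real libraries work with sparse storage and general local-to-global maps.

```python
# Element-loop assembly of the stiffness matrix for -u'' = f on [0, L]
# with piecewise-linear (P1) elements on a uniform mesh.
import numpy as np

def assemble_1d_poisson(n_elements, length=1.0):
    h = length / n_elements
    n_nodes = n_elements + 1
    K = np.zeros((n_nodes, n_nodes))
    k_local = (1.0 / h) * np.array([[1.0, -1.0],
                                    [-1.0, 1.0]])   # P1 element stiffness
    for e in range(n_elements):
        dofs = [e, e + 1]                           # local-to-global dof map
        for a in range(2):
            for b in range(2):
                K[dofs[a], dofs[b]] += k_local[a, b]
    return K

print(assemble_1d_poisson(4))
```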

6.
We present a bit-precise decision procedure for the theory of floating-point arithmetic. The core of our approach is a non-trivial, lattice-theoretic generalisation of the conflict-driven clause learning (CDCL) algorithm in modern SAT solvers to lattice-based abstractions. We use floating-point intervals to reason about the ranges of variables, which allows us to handle arithmetic directly and is more efficient than encoding a formula as a bit-vector, as current floating-point solvers do. Interval reasoning alone is incomplete; we obtain completeness by developing a conflict analysis algorithm that reasons natively about intervals. We have implemented this method in the MathSAT5 SMT solver and evaluated it on assertion-checking problems that bound the values of program variables. Our new technique is faster than a bit-vector encoding approach on 80% of the benchmarks, and faster by one order of magnitude or more on 60% of them. The generalisation of CDCL we propose is widely applicable and can be used to derive abstraction-based SMT solvers for other theories.
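
The interval layer of such a procedure can be sketched compactly. The toy below propagates a single constraint z = x + y over real intervals and reports an empty interval as a conflict; floating-point rounding modes and the paper's lattice-based conflict analysis are omitted.

```python
# Forward/backward interval propagation for z = x + y over real intervals.
def propagate_add(x, y, z):
    """Each argument is an interval (lo, hi); returns narrowed copies."""
    zn = (max(z[0], x[0] + y[0]), min(z[1], x[1] + y[1]))    # z within x + y
    xn = (max(x[0], zn[0] - y[1]), min(x[1], zn[1] - y[0]))  # x within z - y
    yn = (max(y[0], zn[0] - xn[1]), min(y[1], zn[1] - xn[0]))  # y within z - x
    if any(lo > hi for lo, hi in (xn, yn, zn)):
        return None                                # empty interval: conflict
    return xn, yn, zn

print(propagate_add((0, 10), (0, 10), (15, 20)))   # x and y narrow to (5, 10)
```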

7.
Producing and checking proofs from SMT solvers is currently the most feasible method for achieving high confidence in the correctness of solver results. The diversity of solvers and the relative complexity of SMT over, say, SAT mean that flexibility, as well as performance, is a critical characteristic of a proof-checking solution for SMT. This paper describes such a solution, based on a Logical Framework with Side Conditions (LFSC). We describe the framework and show how it can be applied for flexible proof production and checking for two different SMT solvers, clsat and cvc3. We also report empirical results showing good performance relative to solver execution time.
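
To give a feel for what a proof checker verifies, here is a deliberately tiny example, far simpler than LFSC: checking that a claimed resolvent really follows from two clauses by resolution on a pivot literal.

```python
# A toy proof-rule checker: verify one propositional resolution step.
def check_resolution(c1, c2, pivot, resolvent):
    """Clauses are sets of nonzero ints; -v denotes the negation of v."""
    if pivot not in c1 or -pivot not in c2:
        return False                       # pivot must occur with both signs
    expected = (c1 - {pivot}) | (c2 - {-pivot})
    return set(resolvent) == expected

c1, c2 = {1, 2}, {-1, 3}
print(check_resolution(c1, c2, 1, {2, 3}))  # True: from (x1 v x2), (!x1 v x3)
```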

8.
The AtMostSeqCard constraint is the conjunction of a cardinality constraint on a sequence of $n$ variables and of $n - q + 1$ constraints AtMost $u$ on each subsequence of size $q$. This constraint is useful in car-sequencing and crew-rostering problems. In van Hoeve et al. (Constraints 14(2):273–292, 2009), two algorithms designed for the AmongSeq constraint were adapted to this constraint, with $O(2^q n)$ and $O(n^3)$ worst-case time complexity, respectively. In Maher et al. (2008), another algorithm, similarly adaptable to filter the AtMostSeqCard constraint with a time complexity of $O(n^2)$, was proposed. In this paper, we introduce an algorithm for achieving arc consistency on the AtMostSeqCard constraint with an $O(n)$ (hence optimal) worst-case time complexity. Next, we show that this algorithm can be easily modified to achieve arc consistency on some extensions of this constraint. In particular, the conjunction of a set of $m$ AtMostSeqCard constraints sharing the same scope can be filtered in $O(nm)$. We then empirically study the efficiency of our propagator on instances of the car-sequencing and crew-rostering problems.
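
The greedy "leftmost" reasoning that underlies such filtering can be sketched in isolation: compute the maximum number of 1s a 0/1 sequence can carry when every window of size q holds at most u ones. This toy ignores variables fixed to 1 and the overall cardinality bound, both of which the real propagator must handle.

```python
# Greedy leftmost placement: maximum number of 1s under a sliding
# AtMost-u-per-window-of-q restriction, with some positions forbidden.
def max_ones(allowed, q, u):
    """allowed[i]: may position i take value 1? Every length-q window
    may hold at most u ones; greedy leftmost placement is optimal here."""
    x = []
    for i in range(len(allowed)):
        recent = sum(x[max(0, i - q + 1):i])   # ones in the window ending at i
        x.append(1 if allowed[i] and recent < u else 0)
    return sum(x)

print(max_ones([True] * 10, q=4, u=2))  # 6: the greedy pattern 1100110011
```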

9.
We present new filtering algorithms for the Disjunctive and Cumulative constraints, each of which improves the complexity of the state-of-the-art algorithms by a factor of log n. We show how to perform Time-Tabling and Detectable Precedences in linear time on the Disjunctive constraint. Furthermore, we present a linear-time Overload Checking for the Disjunctive and Cumulative constraints. Finally, we show how the Not-first/Not-last rule can be enforced in quadratic time for the Cumulative constraint. These algorithms rely on the union-find data structure, which we leverage to introduce a new data structure that we call the time line. This data structure provides constant-time operations that were previously implemented in logarithmic time by the Θ-tree data structure. Experiments show that these new algorithms are competitive even for a small number of tasks and outperform existing algorithms as the number of tasks increases. We also show that the time line can be used to solve specific scheduling problems.
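
The union-find idea behind such a time line can be shown on its own: find and occupy the earliest free unit time slot at or after a release date in near-constant amortized time, where a balanced-tree structure would spend logarithmic time. This is an illustrative sketch, not the authors' data structure.

```python
# Union-find over time slots: next_free[t] points (possibly transitively)
# to the earliest free slot >= t; occupying a slot unions it with t + 1.
class TimeLine:
    def __init__(self, horizon):
        self.next_free = list(range(horizon + 1))

    def find(self, t):                     # with path compression
        if self.next_free[t] != t:
            self.next_free[t] = self.find(self.next_free[t])
        return self.next_free[t]

    def schedule(self, release):
        """Occupy the earliest free slot >= release and return it."""
        slot = self.find(release)
        self.next_free[slot] = slot + 1    # union the slot with its successor
        return slot

tl = TimeLine(10)
print([tl.schedule(0) for _ in range(3)], tl.schedule(1))  # [0, 1, 2] 3
```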

10.
We introduce a knowledge representation language ${\cal AC(C)}$ extending the syntax and semantics of ASP and CR-Prolog, give some examples of its use, and present an algorithm, $\mathcal{AC}\!solver$, for computing answer sets of ${\cal AC(C)}$ programs. The algorithm does not require full grounding of a program and combines “classical” ASP solving methods with constraint logic programming techniques and CR-Prolog-based abduction. The ${\cal AC(C)}$-based approach often allows one to solve problems that cannot be solved by more traditional ASP solving techniques. We believe that further investigation of the language, and the development of more efficient and reliable solvers for its programs, can help substantially expand the domain of applicability of the answer set programming paradigm.

11.
Spatial reasoning with rectangular cardinal relations
Qualitative spatial representation and reasoning plays an important role in various spatial applications. In this paper we introduce a new formalism, which we call the RCD calculus, for qualitative spatial reasoning with cardinal direction relations between regions of the plane approximated by rectangles. We believe this calculus strikes an attractive balance between efficiency, simplicity, and expressive power, which makes it adequate for spatial applications. We define a constraint algebra and identify a convex tractable subalgebra allowing efficient reasoning with definite and imprecise knowledge about spatial configurations specified by qualitative constraint networks. For this tractable fragment, we propose several polynomial algorithms based on constraint satisfaction to solve the consistency and minimality problems. Some of them rely on a translation of qualitative networks of the RCD calculus to qualitative networks of the Interval or Rectangle Algebra, and back. We show that the consistency problem for convex networks can also be solved inside the RCD calculus by applying a suitable adaptation of the path-consistency algorithm. However, path consistency cannot be applied to obtain the minimal network, contrary to what happens in the convex fragment of the Rectangle Algebra. Finally, we partially analyze the complexity of the consistency problem when adding non-convex relations, showing that it becomes NP-complete in the cases considered. This analysis may contribute to finding a maximal tractable subclass of the RCD calculus and of the Rectangle Algebra, which remains an open problem.
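
A generic path-consistency loop, of the kind adapted in the paper, is easy to state. The sketch below runs it over a toy calculus (the point algebra with base relations <, =, >) rather than the RCD relations; the composition table is the standard one for points on a line.

```python
# Path consistency over a toy qualitative calculus (the point algebra).
COMP = {  # weak composition table: r1;r2 -> possible base relations
    ("<", "<"): {"<"}, ("<", "="): {"<"}, ("<", ">"): {"<", "=", ">"},
    ("=", "<"): {"<"}, ("=", "="): {"="}, ("=", ">"): {">"},
    (">", "<"): {"<", "=", ">"}, (">", "="): {">"}, (">", ">"): {">"},
}

def compose(R1, R2):
    return set().union(*(COMP[(a, b)] for a in R1 for b in R2)) if R1 and R2 else set()

def path_consistency(C, n):
    """C[i][j]: set of base relations between variables i and j."""
    changed = True
    while changed:
        changed = False
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    refined = C[i][j] & compose(C[i][k], C[k][j])
                    if refined != C[i][j]:
                        C[i][j], changed = refined, True
                        if not refined:
                            return False   # empty relation: inconsistent
    return True

ALL = {"<", "=", ">"}
C = [[set(ALL) for _ in range(3)] for _ in range(3)]
for i in range(3):
    C[i][i] = {"="}
C[0][1], C[1][0] = {"<"}, {">"}
C[1][2], C[2][1] = {"<"}, {">"}
print(path_consistency(C, 3), C[0][2])  # True {'<'}: transitivity was inferred
```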

12.
An instance of the maximum constraint satisfaction problem (Max CSP) is a finite collection of constraints on a set of variables, and the goal is to assign values to the variables that maximise the number of satisfied constraints. Max CSP captures many well-known problems (such as Max k-SAT and Max Cut) and is consequently NP-hard. Thus, it is natural to study how restrictions on the allowed constraint types (the constraint language) affect the complexity and approximability of Max CSP. The PCP theorem is equivalent to the existence of a constraint language for which Max CSP has a hard gap at location 1; i.e., it is NP-hard to distinguish between satisfiable instances and instances where at most some constant fraction of the constraints are satisfiable. All constraint languages for which the CSP problem (i.e., the problem of deciding whether all constraints can be satisfied) is currently known to be NP-hard have a certain algebraic property. We prove that any constraint language with this algebraic property makes Max CSP have a hard gap at location 1, which in particular implies that such problems cannot have a PTAS unless P = NP. We then apply this result to Max CSP restricted to a single constraint type; this class of problems contains, for instance, Max Cut and Max DiCut. Assuming P ≠ NP, we show that such problems do not admit a PTAS except in some trivial cases. Our results hold even if the number of occurrences of each variable is bounded by a constant. Finally, we give some applications of our results.

13.
Source code terms such as method names and variable types are often different from the conceptual words mentioned in a search query. This vocabulary mismatch problem can make code search inefficient. In this paper, we present COde voCABUlary (CoCaBu), an approach to resolving the vocabulary mismatch problem when dealing with free-form code search queries. Our approach leverages common developer questions and the associated expert answers to augment user queries with the relevant, but missing, structural code entities, in order to improve the matching of relevant code examples within large code repositories. To instantiate this approach, we build GitSearch, a code search engine, on top of GitHub and Stack Overflow Q&A data. We evaluate GitSearch along several dimensions to demonstrate that (1) its code search results are correct with respect to user-accepted answers; (2) the results are qualitatively better than those of existing Internet-scale code search engines; (3) our engine is competitive against web search engines, such as Google, in helping users solve programming tasks; and (4) GitSearch provides code examples that are acceptable or interesting to the community as answers for Stack Overflow questions.
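
The query-augmentation idea can be caricatured in a few lines: harvest code-like tokens from an answer's text and append them to the free-form query. The regular expression and the sample answer below are invented for illustration and are far cruder than CoCaBu's actual pipeline.

```python
# Toy vocabulary augmentation: mine code-like tokens from Q&A answer text.
import re

def augment_query(query, answer_text):
    # camelCase, snake_case and dotted API names are treated as code entities
    code_tokens = re.findall(
        r"\b(?:[a-z]+[A-Z]\w*|\w+_\w+|[A-Za-z_]\w*(?:\.[A-Za-z_]\w+)+)\b",
        answer_text)
    return query + " " + " ".join(dict.fromkeys(code_tokens))  # dedupe, keep order

answer = "Use BufferedReader.readLine() or Files.readAllLines for this."
print(augment_query("read file line by line java", answer))
```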

14.

ProB provides a constraint solver for the B-method written in Prolog and can make use of different backends based on SAT and SMT solving. One such backend translates B and Event-B operators to SMT-LIB using the Z3 solver. This translation uses quantifiers to axiomatize some operators, which are not well handled by Z3. Several relational constraints, such as the transitive closure, are not supported by this translation. In this article, we substantially improve the translation to SMT-LIB by employing a more constructive, rather than axiomatized, style using Z3's lambda function. Thereby, we are able both to translate more B and Event-B operators to SMT-LIB and to improve the overall performance. We further extend ProB's interface to Z3 to run different solver configurations in parallel. In addition, we present a direct implementation of SMT solving in Prolog using ProB's constraint solver as a theory solver. We hereby aim to combine the strengths of conflict-driven clause learning for identifying contradictions with ProB's constraint solver for finding solutions. We deem this implementation worthwhile since ProB's constraint solver is tailored toward solving B and Event-B constraints, and we thereby avoid the dependency on an external SMT solver. Empirical results show that the new integration of Z3 has improved the performance of constraint solving and makes it possible to solve several constraints that ProB's constraint solver alone cannot. Furthermore, the direct implementation of SMT solving in ProB shows benefits compared to both ProB's constraint solver and the integration of Z3.
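
The constructive style the article advocates can be illustrated with Z3's Python API: a set is modeled as a lambda (a characteristic-function array) and membership becomes a quantifier-free Select. The toy constraint below is ours, not ProB's actual translation.

```python
# Modeling a B-style set constructively with z3's Lambda instead of
# axiomatizing it with quantifiers.
from z3 import Ints, Lambda, Select, And, Solver, sat

x, y = Ints("x y")
# the set {x | 0 <= x < 100 and x mod 2 = 0} as a characteristic function
evens = Lambda([x], And(0 <= x, x < 100, x % 2 == 0))

s = Solver()
s.add(Select(evens, y))   # y is a member of evens, stated without a quantifier
s.add(y > 40)
if s.check() == sat:
    print(s.model()[y])   # some even value strictly between 40 and 100
```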

15.
This paper extends Common2, the family of objects that implement, and are wait-free implementable from, 2-consensus objects, in two ways. First, the stack object is shown to be in the family, refuting a conjecture to the contrary [6]. Second, Common2 is investigated in the unbounded-concurrency model, whereas until now it was considered only in an n-process model. We show that the fetch-and-add, test-and-set, and stack objects are in Common2 even with respect to this stronger notion of wait-free implementation. Our constructions rely on a wait-free implementation of immediate snapshots in the unbounded-concurrency model, which was previously not known to be possible. The introduction of unbounded concurrency to the study of Common2 opens several directions of research: are there objects that have n-process implementations but are not unbounded-concurrency implementable? We conjecture that swap is such an object. Additionally, the hope is that a queue impossibility proof, which eludes us in the n-process model, will be easier to establish in the unbounded-concurrency model.

16.
The FOCUS constraint expresses the notion that solutions are concentrated. In practice, this constraint suffers from the rigidity of its semantics. To tackle this issue, we propose three generalizations of the FOCUS constraint. We provide a complete filtering algorithm for each of them. Moreover, we propose ILP and CSP decompositions.
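
A hedged checker for one common reading of the core semantics (values strictly above a threshold k must be coverable by at most yc disjoint intervals, each of length at most plen; the parameter names are ours) looks like this. The paper's generalizations and filtering algorithms go well beyond such a check.

```python
# Checking whether the "concentration" condition holds for a fixed sequence:
# positions with value > k must fit into at most yc intervals of length <= plen.
import math

def focus_holds(xs, yc, plen, k):
    runs, current = [], 0
    for v in xs:
        if v > k:
            current += 1
        elif current:
            runs.append(current)
            current = 0
    if current:
        runs.append(current)
    needed = sum(math.ceil(r / plen) for r in runs)  # long runs must be split
    return needed <= yc

print(focus_holds([0, 3, 4, 0, 0, 5, 0], yc=2, plen=2, k=0))  # True: 2 intervals
print(focus_holds([3, 3, 3, 0, 5, 0, 5], yc=2, plen=2, k=0))  # False: needs 4
```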

17.
In this paper, we present ParaMiner, a generic and parallel algorithm for closed pattern mining. ParaMiner is built on the principles of pattern enumeration in strongly accessible set systems. Its efficiency is due to a novel dataset reduction technique (which we call EL-reduction), combined with a novel technique for performing dataset reduction in a parallel execution on a multi-core architecture. We illustrate ParaMiner's genericity by using the algorithm to solve three different pattern mining problems: the frequent itemset mining problem, the frequent connected relational graph mining problem, and the gradual itemset mining problem. In this paper, we prove the soundness and completeness of ParaMiner. Furthermore, our experiments show that despite being a generic algorithm, ParaMiner can compete with specialized state-of-the-art algorithms designed for the pattern mining problems mentioned above. Moreover, for the particular problem of gradual itemset mining, ParaMiner outperforms the state-of-the-art algorithm by two orders of magnitude.
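
The closure-based core of closed pattern mining, specialized to itemsets, fits in a short sequential sketch; ParaMiner's dataset reduction and multi-core parallelism are deliberately left out.

```python
# A tiny sequential closed frequent itemset miner built on the closure
# operator: closure(P) = intersection of all transactions containing P.
def closure(itemset, transactions):
    covering = [t for t in transactions if itemset <= t]
    return frozenset.intersection(*covering) if covering else frozenset()

def closed_itemsets(transactions, minsup):
    items = sorted(set().union(*transactions))
    found = set()

    def expand(current):
        for i in items:
            candidate = closure(current | {i}, transactions)
            support = sum(1 for t in transactions if candidate <= t)
            if support >= minsup and candidate not in found and candidate > current:
                found.add(candidate)
                expand(candidate)      # grow only into strictly larger closures

    expand(frozenset())
    return found

db = [frozenset("abc"), frozenset("ab"), frozenset("bc")]
print(sorted(map(sorted, closed_itemsets(db, minsup=2))))
# [['a', 'b'], ['b'], ['b', 'c']]
```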

18.
We propose a novel distributed algorithm for the minimum cut problem. Motivated by applications like volumetric segmentation in computer vision, we aim at solving large sparse problems. When the problem does not fully fit in memory, we need to either process it by parts, looking at one part at a time, or distribute it across several computers. Many mincut/maxflow algorithms are designed for the shared-memory architecture and do not scale to this setting. We consider algorithms that work on disjoint regions of the problem and exchange messages between the regions. We show that the region push-relabel algorithm of Delong and Boykov (A scalable graph-cut algorithm for N-D grids, in CVPR, 2008) uses $\Theta(n^2)$ rounds of message exchange, where $n$ is the number of vertices. Our new algorithm performs path augmentations inside the regions and push-relabel style updates between the regions. It uses asymptotically fewer message exchanges, $O(|\mathcal{B}|^2)$, where $\mathcal{B}$ is the set of boundary vertices. The sequential and parallel versions of our algorithm are competitive with the state-of-the-art in the shared-memory model. By achieving a lower number of message exchanges (even asymptotically lower in our synthetic experiments), they are better suited to solving large problems using disk storage or a distributed system.
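
The intra-region building block, path augmentation, is shown below in miniature as a plain Edmonds-Karp max-flow on an adjacency-matrix residual graph; the region decomposition and message protocol of the paper are not modeled.

```python
# Edmonds-Karp max-flow: repeatedly augment along a shortest residual path.
from collections import deque

def max_flow(cap, s, t):
    n, flow = len(cap), 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:        # BFS for a shortest residual path
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow                     # no augmenting path: flow is maximum
        bottleneck, v = float("inf"), t
        while v != s:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:                       # push flow, update residual capacities
            cap[parent[v]][v] -= bottleneck
            cap[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck

cap = [[0, 3, 2, 0], [0, 0, 1, 2], [0, 0, 0, 3], [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))  # 5
```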

19.
Numerous problems in theoretical computer science can be solved very efficiently using powerful algebraic constructions. Computing shortest paths, constructing expanders, and proving the PCP theorem are just a few examples of this phenomenon. The quest for combinatorial algorithms that do not use heavy algebraic machinery but are roughly as efficient has become a central field of study in this area. Combinatorial algorithms are often simpler than their algebraic counterparts. Moreover, in many cases, combinatorial algorithms and proofs provide additional understanding of the studied problems. In this paper we initiate the study of combinatorial algorithms for distributed graph coloring problems. In a distributed setting, a communication network is modeled by a graph $G=(V,E)$ of maximum degree $\Delta$. The vertices of $G$ host the processors, and communication is performed over the edges of $G$. The goal of distributed vertex coloring is to color $V$ with $\Delta + 1$ colors such that any two neighbors are colored with distinct colors. Currently, efficient algorithms for vertex coloring that require $O(\Delta + \log^* n)$ time are based on the algebraic algorithm of Linial (SIAM J Comput 21(1):193–201, 1992) that employs set systems. The best currently known combinatorial, set-system-free algorithm, due to Goldberg et al. (SIAM J Discret Math 1(4):434–446, 1988), requires $O(\Delta^2 + \log^* n)$ time. We significantly improve over this by devising a combinatorial $(\Delta + 1)$-coloring algorithm that runs in $O(\Delta + \log^* n)$ time. This exactly matches the running time of the best-known algebraic algorithm. In addition, we devise a tradeoff for computing an $O(\Delta \cdot t)$-coloring in $O(\Delta / t + \log^* n)$ time, for almost the entire range $1 < t < \Delta$. We also compute a maximal independent set in $O(\Delta + \log^* n)$ time on general graphs, and in $O(\log n / \log \log n)$ time on graphs of bounded arboricity. Prior to our work, these results could only be achieved using algebraic techniques. We believe that our algorithms are more suitable for real-life networks with limited resources, such as sensor networks.
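
The sequential baseline behind $(\Delta + 1)$-coloring is worth keeping in mind: every vertex has at most $\Delta$ neighbours, so some colour in $\{0, \ldots, \Delta\}$ is always free. The sketch below is centralized; the paper's contribution is achieving this colour bound fast in the distributed setting.

```python
# Greedy sequential (Delta + 1)-coloring: each vertex picks the smallest
# colour not used by an already-coloured neighbour; deg(v) + 1 candidates
# always suffice.
def greedy_coloring(adj):
    color = {}
    for v in adj:                                # any vertex order works
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in range(len(adj[v]) + 1) if c not in used)
    return color

cycle5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
print(greedy_coloring(cycle5))  # an odd cycle needs 3 colours; Delta + 1 = 3
```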

20.
Numerous sophisticated local algorithms have been suggested in the literature for various fundamental problems. Notable examples are the MIS and $(\Delta + 1)$-coloring algorithms by Barenboim and Elkin (Distrib Comput 22(5–6):363–379, 2010), by Kuhn (2009), and by Panconesi and Srinivasan (J Algorithms 20(2):356–374, 1996), as well as the $O(\Delta^2)$-coloring algorithm by Linial (SIAM J Comput 21(1):193–201, 1992). Unfortunately, most known local algorithms (including, in particular, the aforementioned ones) are non-uniform; that is, local algorithms generally use good estimates of one or more global parameters of the network, e.g., the maximum degree $\Delta$ or the number of nodes $n$. This paper provides a method for transforming a non-uniform local algorithm into a uniform one. Furthermore, the resulting algorithm enjoys the same asymptotic running time as the original non-uniform algorithm. Our method applies to a wide family of both deterministic and randomized algorithms. Specifically, it applies to almost all state-of-the-art non-uniform algorithms for MIS and maximal matching, as well as to many results concerning the coloring problem (in particular, it applies to all the aforementioned algorithms). To obtain our transformations we introduce a new distributed tool called pruning algorithms, which we believe may be of independent interest.
