Similar Documents
Found 20 similar documents (search time: 31 ms).
1.
The Nelson–Oppen combination method combines decision procedures for first-order theories over disjoint signatures into a single decision procedure for the union theory. To be correct, the method requires that the component theories be stably infinite. This restriction makes the method inapplicable to many interesting theories, such as theories having only finite models. In this paper, we describe two extensions of the Nelson–Oppen method that address the problem of combining theories that are not stably infinite. In our extensions, the component decision procedures exchange not only equalities between shared variables but also certain cardinality constraints. Applications of our results include the combination of theories having only finite models, as well as the combination of non-stably infinite theories with the theory of equality, the theories of total and partial orders, and the theory of lattices with maximum and minimum. Calogero G. Zarba: work done by this author at Stanford University and later at LORIA and INRIA-Lorraine.
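As orientation, here is a minimal sketch of the classic equality-exchange loop that the paper extends, written for the deterministic (convex-theory) case. The solver interface (`is_sat`, `assert_eq`, `implied_eqs`) is hypothetical, and the cardinality-constraint channel that the paper adds is deliberately omitted:

```python
from itertools import combinations

def nelson_oppen(solvers, shared_vars):
    """Deterministic Nelson-Oppen loop (sketch, convex theories only).

    Each element of `solvers` is a hypothetical component procedure with:
      is_sat()            -- decide satisfiability of its current store
      assert_eq(x, y)     -- add the equality x = y to its store
      implied_eqs(pairs)  -- subset of `pairs` entailed by its store
    """
    known = set()                              # equalities exchanged so far
    changed = True
    while changed:
        changed = False
        for solver in solvers:
            if not solver.is_sat():            # one component unsat => unsat
                return False
            fresh = [p for p in combinations(sorted(shared_vars), 2)
                     if p not in known]
            for x, y in solver.implied_eqs(fresh):
                known.add((x, y))
                for other in solvers:          # broadcast to every component
                    other.assert_eq(x, y)
                changed = True
    return True                                # fixpoint: all components sat
```

For non-convex theories the loop must additionally case-split on disjunctions of equalities; the cardinality constraints the paper introduces travel over a further, separate channel of exchange.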

2.
Decision procedures are key components of theorem provers and constraint satisfaction systems. Their modular combination is of prime interest for building efficient systems, but their effective use is often limited by poor interface capabilities when such procedures provide only a simple “sat/unsat” answer. In this paper, we develop a framework for designing cooperation schemas between such procedures while maintaining the modularity of their interfaces. First, we use the framework to specify and prove the correctness of the classic combination schemas of Nelson–Oppen and Shostak. Second, we introduce the concept of deduction-complete satisfiability procedures, show how to build them for large classes of theories, and provide a schema for combining them modularly. Third, we consider the problem of modularly constructing explanations for combinations by reusing available proof-producing procedures for the component theories.

3.
We extend the Nelson–Oppen combination procedure to the case of theories that are compatible with respect to a common subtheory in the shared signature. The notion of compatibility relies on model completions and related concepts from classical model theory.

4.
This paper offers a theoretical study of constraint simplification, a fundamental issue for the designer of a practical type inference system with subtyping. In the simpler case where constraints are equations, a simple isomorphism between constrained type schemes and finite-state automata yields a complete constraint simplification method. Using it as a guide for intuition, we move on to the case of subtyping and describe several simplification algorithms. Although no longer complete, they are conceptually simple, efficient, and very effective in practice. Overall, this paper gives a concise theoretical account of the techniques found at the core of our type inference system. Our study is restricted to the case where constraints are interpreted in a non-structural lattice of regular terms. Nevertheless, we highlight a small number of general ideas which explain our algorithms at a high level and may be applicable to a variety of other systems.
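To suggest what the automaton view buys in the equational case, here is a standard Moore-style partition-refinement DFA minimization. It is an illustrative sketch rather than the paper's algorithm, and it assumes a total transition function `delta[s][a]`:

```python
def minimize_dfa(states, alphabet, delta, accepting):
    """Moore-style partition refinement (sketch); `delta[s][a]` is total.

    Returns the coarsest partition of `states` into equivalence classes;
    each block is one state of the minimal DFA."""
    partition = [b for b in (set(accepting), set(states) - set(accepting)) if b]
    while True:
        refined = []
        for block in partition:
            groups = {}
            for s in block:
                # signature: which current block each input symbol leads to
                key = tuple(
                    next(i for i, b in enumerate(partition) if delta[s][a] in b)
                    for a in alphabet
                )
                groups.setdefault(key, set()).add(s)
            refined.extend(groups.values())
        if len(refined) == len(partition):     # no block was split: fixpoint
            return refined
        partition = refined
```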

5.
In this paper we present a new inductive inference algorithm for a class of logic programs called linear monadic logic programs. It has several unique features not found in Shapiro's Model Inference System. It has been proved that a set of trees is rational if and only if it is computed by a linear monadic logic program, and that a rational set of trees is recognized by a tree automaton. Based on these facts, we can reduce the problem of inductive inference of linear monadic logic programs to the problem of inductive inference of tree automata. Furthermore, several efficient inference algorithms for finite automata have already been developed. We extend them to an inference algorithm for tree automata and use it to obtain an efficient inductive inference algorithm for linear monadic logic programs. We show the correctness and time complexity of our algorithm and give several comparisons with the Model Inference System.
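A minimal sketch of the target formalism may help: a deterministic bottom-up tree automaton run in Python, with a hypothetical encoding of trees as `(symbol, children)` pairs chosen for brevity:

```python
def run(rules, tree):
    """Bottom-up evaluation: `rules` maps (symbol, tuple-of-child-states)
    to the state the automaton assigns at that node."""
    symbol, children = tree
    child_states = tuple(run(rules, c) for c in children)
    return rules[(symbol, child_states)]

def accepts(rules, final_states, tree):
    try:
        return run(rules, tree) in final_states
    except KeyError:               # no applicable rule: the automaton rejects
        return False

# Trees over a binary symbol f and a constant a, all assigned state 'q':
rules = {("a", ()): "q", ("f", ("q", "q")): "q"}
print(accepts(rules, {"q"}, ("f", [("a", []), ("a", [])])))   # True
```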

6.
State-of-the-art algorithms for on-the-fly automata-theoretic LTL model checking use nested depth-first search to look for accepting cycles in the product of the system and the Büchi automaton. Here, we present two new single depth-first search algorithms that accomplish the same task. The first is based on Tarjan's algorithm for detecting strongly connected components, while the second is a combination of the first and Couvreur's algorithm for finding accepting cycles in the product of a system and a generalized Büchi automaton. Both new algorithms report an accepting cycle immediately after all transitions in the cycle have been investigated. We show their correctness, describe efficient implementations, and discuss how they interact with other model checking techniques, such as bitstate hashing. The algorithms are compared to the nested-search algorithms in experiments on both random and actual state spaces, using random and real formulas. Our measurements indicate that our algorithms investigate at most as many states as the old ones, and in the case of a violation of the correctness property they often explore significantly fewer states.
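For orientation, here is a sketch of the Tarjan-based idea: run the SCC algorithm on the product automaton and stop once a nontrivial SCC containing an accepting state closes. This simplifies the paper's algorithms, which can report even earlier; `successors`, `accepting`, and `initial` are assumed callbacks describing the product:

```python
def has_accepting_cycle(successors, accepting, initial):
    """Tarjan-style accepting-cycle detection on the product (sketch).

    `successors(s)` yields the one-step successors of product state s and
    `accepting(s)` tests the Buechi condition. Returns True as soon as a
    nontrivial SCC containing an accepting state is completed. (A real
    implementation would be iterative to escape Python's recursion limit.)
    """
    index, low, on_stack, stack = {}, {}, set(), []
    counter = 0

    def strongconnect(v):
        nonlocal counter
        index[v] = low[v] = counter
        counter += 1
        stack.append(v)
        on_stack.add(v)
        self_loop = False
        for w in successors(v):
            if w == v:
                self_loop = True
            if w not in index:                 # tree edge: recurse
                if strongconnect(w):
                    return True
                low[v] = min(low[v], low[w])
            elif w in on_stack:                # edge back into the stack
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:                 # v closes an SCC
            scc = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                scc.append(w)
                if w == v:
                    break
            if (len(scc) > 1 or self_loop) and any(accepting(s) for s in scc):
                return True
        return False

    return strongconnect(initial)
```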

7.
We describe a combination of BDDs and superposition theorem proving, called light-weight theorem proving, and its application to the flexible and efficient automation of the reasoning required to debug and verify pointer-manipulating programs. This class of programs is notoriously challenging to reason about, and it is also interesting from a programming point of view since pointers are an important source of bugs. The implementation of our technique (in a system called haRVey) scales up significantly better than state-of-the-art tools such as E (a superposition prover) and Simplify (a prover based on the Nelson–Oppen combination schema of decision procedures, used in ESC/Java) on a set of proof obligations arising in debugging and verifying C functions that manipulate pointers.

8.
Automated deduction methods should be specified not procedurally but declaratively, as inference systems that are proved correct regardless of implementation details. Different algorithms implementing a given inference system should then be specified as strategies for applying the inference rules. The inference rules themselves can be naturally specified as (possibly conditional) rewrite rules. Using a high-performance rewriting language implementation and a strategy language to guide rewriting computations, we can obtain, in a modular way, implementations of both the inference rules of automated deduction procedures and the algorithms controlling their application. This paper presents the design of a strategy language for the Maude rewriting language that supports this modular decomposition: inference systems are specified in system modules, and strategies in strategy modules. We give a set-theoretic semantics for this strategy language, present its different combinators, illustrate its main ideas with several examples, and describe both a reflective prototype in Maude and an ongoing C++ implementation.
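The division of labor can be illustrated in miniature: rewrite rules as partial functions, strategies as combinators over them. The sketch below mirrors the spirit of such a strategy language in Python; it is not Maude's actual syntax or semantics:

```python
def rule(fn):
    """Lift a partial rewrite fn(term) -> term-or-None into a strategy:
    a strategy maps a term to a list of result terms (empty = failure)."""
    def strat(term):
        out = fn(term)
        return [out] if out is not None else []
    return strat

def seq(s1, s2):
    """Apply s1, then s2 to each of its results."""
    return lambda t: [u for r in s1(t) for u in s2(r)]

def orelse(s1, s2):
    """Try s1; only if it fails, try s2."""
    return lambda t: s1(t) or s2(t)

def repeat(s):
    """Apply s as long as it succeeds (normalization)."""
    def strat(t):
        results = s(t)
        return [t] if not results else [u for r in results for u in strat(r)]
    return strat

# Example: the rule add(0, x) -> x, with terms as nested tuples.
add_zero = rule(lambda t: t[2]
                if isinstance(t, tuple) and t[:2] == ("add", 0) else None)
print(repeat(add_zero)(("add", 0, ("add", 0, "x"))))   # ['x']
```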

9.
In this paper, we present two new parallel algorithms to solve large instances of the scheduling of independent tasks problem. First, we describe a parallel version of the Min–min heuristic. Second, we present GraphCell, an advanced parallel cellular genetic algorithm (CGA) for the GPU. Two new generic recombination operators that take advantage of the massive parallelism of the GPU are proposed for GraphCell. A speedup study shows the high performance of the parallel Min–min algorithm on the GPU versus several CPU versions of the algorithm (both sequential and parallel, using multiple threads). GraphCell improves state-of-the-art solutions, especially for larger problems, and provides an alternative to our GPU Min–min heuristic when more accurate solutions are needed, at the expense of an increased runtime.
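The sequential baseline is easy to state. Below is a sketch of the classic Min–min heuristic over an expected-time-to-compute matrix `etc[t][m]`; the paper's contribution is parallelizing this (and GraphCell) on the GPU, which the sketch does not attempt:

```python
def min_min(etc):
    """Sequential Min-min heuristic for scheduling independent tasks.

    `etc[t][m]` is the expected time to compute task t on machine m.
    Repeatedly assigns the task whose earliest possible completion time
    is smallest (the "min of mins") to its best machine."""
    n_tasks, n_machines = len(etc), len(etc[0])
    ready = [0.0] * n_machines                 # machine availability times
    unassigned = set(range(n_tasks))
    schedule = {}
    while unassigned:
        t, m, finish = min(
            ((t, m, ready[m] + etc[t][m])
             for t in unassigned for m in range(n_machines)),
            key=lambda x: x[2])
        schedule[t] = m
        ready[m] = finish
        unassigned.remove(t)
    return schedule, max(ready)                # assignment and makespan
```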

10.
2LS is a decidable many-sorted set-theoretic language involving one sort for elements and one sort for sets of elements. In this paper we extend 2LS with constructs for expressing monotonicity, additivity, and multiplicativity properties of set-to-set functions. We call the resulting language 2LSmf. We prove that 2LSmf is decidable by reducing the problem of determining the satisfiability of its sentences to the problem of determining the satisfiability of sentences of 2LS. Furthermore, we prove that the language 2LSmf is stably infinite with respect to the sort of elements. Therefore, by using a many-sorted version of the Nelson–Oppen combination method, 2LSmf can be combined with other languages modeling the sort of elements.

11.
The continuous evolution of smart devices has led to serious limitations in multimedia applications. Multimedia graphic design and animation, and the increased use of rich and complex multimedia content on the web and other media, have created a need to diversify the accessibility of content. Several techniques are therefore used today to design universally accessible content. The main technique still used to maintain accessibility is to create two parallel streams of design and development for the same content; continued evolution would force the creation of yet more streams. This motivates automatic reasoning on multimedia content, allowing a computer to modify a design according to device capabilities, user status, and context in order to deliver personalized, adapted content. In this paper, we propose an abstract document model called XMS (short for XML Multimedia Specification); it describes the composition of an original multimedia document and can be extended to any document type. We present how the spatial information in this document can be used to adapt and modify the original document. We focus mainly on the spatial aspect of a web document: a combination of RCC8, qualitative distances, and directions is used to describe the layout of a set of objects. The granularity at which objects are defined determines the level of detail processed by our Prolog-based inference system. We show simplified versions of algorithms from the inference system and how it operates on the spatial dimension of the document, and we end with illustrated examples of how the spatial-relation manipulation algorithms work.
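To make the qualitative-spatial side concrete, here is a hedged illustration: classifying two axis-aligned rectangles into an RCC8 base relation. Real regions in a document need not be rectangles, so this only conveys the flavor of the relations the inference system manipulates:

```python
def rcc8(a, b):
    """RCC8 base relation of two axis-aligned rectangles (illustration).

    A rectangle is ((xmin, xmax), (ymin, ymax))."""
    def closed_meet(p, q):      # closures share a point on this axis
        return p[0] <= q[1] and q[0] <= p[1]
    def open_meet(p, q):        # interiors overlap on this axis
        return p[0] < q[1] and q[0] < p[1]
    def inside(p, q):           # interval p contained in interval q
        return q[0] <= p[0] and p[1] <= q[1]
    def strictly_inside(p, q):
        return q[0] < p[0] and p[1] < q[1]

    if a == b:
        return "EQ"
    if not all(closed_meet(p, q) for p, q in zip(a, b)):
        return "DC"             # disconnected
    if not all(open_meet(p, q) for p, q in zip(a, b)):
        return "EC"             # externally connected (touching)
    if all(inside(p, q) for p, q in zip(a, b)):
        return ("NTPP" if all(strictly_inside(p, q) for p, q in zip(a, b))
                else "TPP")     # (non-)tangential proper part
    if all(inside(q, p) for p, q in zip(a, b)):
        return ("NTPPi" if all(strictly_inside(q, p) for p, q in zip(a, b))
                else "TPPi")    # inverse proper part
    return "PO"                 # partial overlap

print(rcc8(((0, 1), (0, 1)), ((0, 2), (0, 2))))   # TPP
```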

12.
This paper considers a particular class of algorithms that present certain difficulties for formal verification: algorithms that use a single data structure for two or more purposes, that combine program control information with other data structures, or that are developed as a combination of a basic idea with an implementation technique. Our approach is based on applying proven semantics-preserving transformation rules in a wide-spectrum language. Starting with a set-theoretical specification of “reachability”, we are able to derive iterative and recursive graph marking algorithms using the “pointer switching” idea of Schorr and Waite (1967). There have been several proofs of correctness of the Schorr–Waite algorithm and a small number of transformational developments of it. The great advantage of our approach is that we can derive the algorithm from its specification using only general-purpose transformation rules, without the need for complicated induction arguments. Our approach applies equally well to several more complex algorithms that make use of the pointer-switching strategy, including a hybrid algorithm that uses a fixed-length stack, switching to the pointer-switching strategy when the stack runs out.
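The pointer-switching idea itself is compact. Below is a sketch of Schorr–Waite marking for two-pointer nodes; it shows the derived algorithm, not the transformational derivation that is the paper's actual subject:

```python
class Node:
    def __init__(self, left=None, right=None):
        self.left, self.right = left, right
        self.marked = False    # set when the node is first reached
        self.flag = False      # which child currently parks the back-pointer

def schorr_waite(root):
    """Schorr-Waite marking for two-pointer nodes (sketch).

    A depth-first traversal that needs no stack: the path back to the
    root is encoded by temporarily reversed pointers, each restored on
    the way back up. Works on shared and cyclic structures."""
    p, q = None, root                     # p: reversed path, q: current
    while True:
        if q is not None and not q.marked:
            q.marked = True               # advance: push q onto the path
            nxt = q.left
            q.left = p                    # reverse the edge just followed
            p, q = q, nxt
        elif p is None:
            break                         # retreated past the root: done
        elif not p.flag:
            p.flag = True                 # swing: left done, go right
            grandparent = p.left
            p.left = q                    # restore the left-child pointer
            q = p.right
            p.right = grandparent         # park the back-pointer on the right
        else:
            grandparent = p.right         # pop: both subtrees done
            p.right = q                   # restore the right-child pointer
            q, p = p, grandparent
```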

13.
Branch-and-Combine (BaC) clock distribution has recently been introduced. The most interesting aspect of the new scheme is its ability to bound skew by a constant irrespective of network size. In this paper, we introduce algorithms for the systematic synthesis of BaC networks for clocking meshes, tori, and hypercubes of different dimensionalities. For meshes, our approach relies on tiling techniques. We start with the identification of basic proper tiles satisfying certain criteria and define a set of valid transformations on tiles. By appropriately applying a sequence of transformations to a basic proper tile, one can synthesize a valid BaC network. We formally introduce methods and procedures for applying the above steps to systematically construct different valid BaC network designs for 2D and 3D meshes. To construct BaC networks for clocking hypercubes of any dimensionality, we describe a formal methodology based on an approach called replication, which constructs larger hypercube clocking networks from smaller ones. We combine the techniques for 2D and 3D meshes with replication techniques to formulate a methodology applicable to meshes and tori of dimensionality greater than three. We provide proofs of correctness for the algorithms we introduce. In addition, we formally define an optimality criterion based on link costs, which we use to check the optimality of the synthesized network designs. In the case of meshes, we show that the majority of synthesized networks are optimal with respect to our criterion. For those that are suboptimal, we describe a procedure for identifying and removing unnecessary (redundant) links; the procedure is guaranteed to optimize the network without changing its behavioral parameters.

14.
15.
We present new primal–dual algorithms for several network design problems. The problems considered are the generalized Steiner tree problem (GST), the directed Steiner tree problem (DST), and the set cover problem (SC), which is a special case of DST. All of these problems are NP-hard, so we are interested in approximation algorithms for them. First, we give an algorithm for DST based on the traditional approach to designing primal–dual approximation algorithms. We show that when the problem is restricted to quasi-bipartite graphs, the approximation factor of the algorithm is k, where k is the number of terminals. We also give pathologically bad examples of the algorithm's performance. To overcome the problems exposed by these examples, we design a new framework for primal–dual algorithms that can be applied to all of our problems. The main feature of the new approach is that, unlike traditional primal–dual algorithms, it keeps the dual solution in the interior of the dual feasible region. This allows us to avoid including too many arcs in the solution, and thus achieves a smaller-cost solution. Our computational results show that the interior-point version of the primal–dual method performs better than the original primal–dual method most of the time.
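As a reference point for the traditional approach the paper improves on, here is the classic primal–dual set cover algorithm, which raises one dual variable until a containing set goes tight; the paper's interior-point variant instead keeps the duals strictly inside the feasible region:

```python
def primal_dual_set_cover(universe, sets, cost):
    """Traditional primal-dual set cover (sketch).

    For each uncovered element, raise its dual variable until some set
    containing it becomes tight, then buy that set. Yields an
    f-approximation, where f is the maximum element frequency."""
    y = {e: 0.0 for e in universe}        # dual variables
    slack = dict(cost)                    # cost minus dual load, per set
    covered, solution = set(), []
    for e in universe:                    # any order of elements works
        if e in covered:
            continue
        containing = [s for s in sets if e in sets[s]]
        tight = min(containing, key=lambda s: slack[s])
        y[e] = slack[tight]               # raise y_e until `tight` goes tight
        for s in containing:
            slack[s] -= y[e]
        solution.append(tight)
        covered |= sets[tight]
    return solution

universe = {1, 2, 3, 4}
sets = {"A": {1, 2}, "B": {2, 3, 4}, "C": {4}}
cost = {"A": 1.0, "B": 2.0, "C": 1.0}
print(primal_dual_set_cover(universe, sets, cost))    # e.g. ['A', 'B']
```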

16.
This paper proposes a new anytime possibilistic inference algorithm for min-based directed networks. Our algorithm departs from a direct adaptation of probabilistic propagation algorithms, since it avoids transforming the initial network into a junction tree, which is known to be a hard problem. The proposed algorithm is composed of several local stabilization procedures. Stabilization procedures aim to guarantee that the local distributions defined on each node are coherent with respect to those of its parents. We provide experimental results which, for instance, compare our algorithm with ones based on a direct adaptation of probabilistic propagation algorithms.
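A heavily simplified sketch of one local computation may help: in a min-based network, a node's possibility is obtained by max–min combination over parent configurations. Treating parents through their individual marginals, as below, is only a local approximation of the kind the stabilization procedures iterate and refine; it is not the paper's algorithm:

```python
from itertools import product

def local_marginal(cond, parent_marginals):
    """One max-min local computation in a min-based network (sketch).

    cond[(x, u)] = pi(x | u) for each parent configuration u (a tuple);
    parent_marginals[i][v] = current possibility of value v for parent i.
    pi(x) is approximated as max over u of min(pi(x|u), min_i pi_i(u_i)),
    i.e. parents are combined through their individual marginals."""
    values = {x for (x, u) in cond}
    domains = [list(pm) for pm in parent_marginals]
    marginal = {}
    for x in values:
        best = 0.0
        for u in product(*domains):
            pi_u = min((pm[v] for pm, v in zip(parent_marginals, u)),
                       default=1.0)
            best = max(best, min(cond[(x, u)], pi_u))
        marginal[x] = best
    return marginal
```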

17.
Constraints have played a central role in constraint programming (CP) because they capture key substructures of a problem and efficiently exploit them to boost inference. This paper intends to do the same thing for search, proposing constraint-centered heuristics that guide the exploration of the search space toward areas likely to contain a high number of solutions. We first propose new search heuristics based on solution-counting information at the level of individual constraints. We then describe efficient algorithms to evaluate the number of solutions of two important families of constraints: occurrence counting constraints, such as alldifferent, and sequencing constraints, such as regular. In both cases we take advantage of existing filtering algorithms to speed up the evaluation. Experimental results on benchmark problems show the effectiveness of our approach. An earlier version of this paper appeared as [24].
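For intuition on solution counting: the number of solutions of an alldifferent constraint is the permanent of its variable/value biadjacency matrix. A bitmask dynamic program computes it exactly for small cases; the paper relies on cheaper bounds and existing filtering rather than exact counts:

```python
from functools import lru_cache

def count_alldifferent(domains, values):
    """Exact number of solutions of alldifferent(x_1, ..., x_n).

    Equals the permanent of the variable/value biadjacency matrix,
    computed here by bitmask dynamic programming over used values."""
    index = {v: i for i, v in enumerate(values)}
    masks = [sum(1 << index[v] for v in d) for d in domains]

    @lru_cache(maxsize=None)
    def count(i, used):
        if i == len(masks):
            return 1
        total, free = 0, masks[i] & ~used
        while free:
            bit = free & -free            # lowest still-available value
            total += count(i + 1, used | bit)
            free ^= bit
        return total

    return count(0, 0)

print(count_alldifferent([{1, 2}, {1, 2}, {1, 2, 3}], [1, 2, 3]))   # 2
```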

18.
Waldmeister is a high-performance theorem prover for unit-equational first-order logic. In the making of Waldmeister, we have applied an engineering approach, identifying the critical points with respect to efficiency in time and space. Our logical three-level system model consists of the basic operations on the lowest level, where we put great stress on efficient data structures and algorithms. For the middle level, where the inference steps are aggregated into an inference machine, flexible adjustment has proven essential during experimental evaluation. The top level holds the control strategy and the reduction ordering. Although only standard strategies are employed at this level, very large proof tasks have been managed in reasonable time.

19.
Geosensor networks present unique resource constraints for spatial computation, including limited battery power, communication constraints, and frequently the lack of a coordinate positioning system. As a result, there is a need for new algorithms that can efficiently answer basic spatial queries within those resource constraints. This paper explores the design and evaluation of a family of new algorithms for determining the topological relations between regions monitored by such a resource-constrained geosensor network. The algorithms are based on efficient, decentralized (in-network) variants of the conventional 4-intersection and intersection-and-difference models, with in-network data aggregation. Further, our algorithms operate without any coordinate information, making them suitable for applications where a positioning system is unavailable or unreliable. While all four algorithms are shown to have overall communication complexity O(n) and optimal load balance O(1), they differ in the level of topological detail they can detect, the types of regions they can monitor, and the constant factors in their communication complexity. The paper also demonstrates the impact of finite-granularity observations on the correctness of the query results. In the conclusions, we identify the need for further fundamental research on the relationship between topological relations between regions and limited-granularity sensor observations of those regions.
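The 4-intersection model itself reduces to a small lookup: which of the four interior/boundary intersections are non-empty determines the relation between two simple regions. This sketch assumes the point sets (e.g. labeled grid cells) are already aggregated, which is precisely the centralized step the in-network algorithms avoid:

```python
# Key: emptiness of the four intersections, True = non-empty, in the order
# (interior(A) & interior(B), interior(A) & boundary(B),
#  boundary(A) & interior(B), boundary(A) & boundary(B)).
RELATIONS = {
    (False, False, False, False): "disjoint",
    (False, False, False, True):  "meet",
    (True,  True,  True,  True):  "overlap",
    (True,  False, False, True):  "equal",
    (True,  False, True,  False): "inside",
    (True,  False, True,  True):  "coveredBy",
    (True,  True,  False, False): "contains",
    (True,  True,  False, True):  "covers",
}

def four_intersection(a_int, a_bnd, b_int, b_bnd):
    """Look up the topological relation of two simple regions from
    their interior and boundary point sets (e.g. sets of grid cells)."""
    key = (bool(a_int & b_int), bool(a_int & b_bnd),
           bool(a_bnd & b_int), bool(a_bnd & b_bnd))
    return RELATIONS.get(key, "impossible for simple regions")
```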

20.
In this paper we study the problem of designing and specifying standard program components applicable to a wide variety of tasks; for this study we choose the specific problem domain of data structures for general searching problems. Within this domain, Bentley and Saxe [1] have developed transformations for converting solutions of simple searching problems into solutions of more complex problems. We discuss one of those transformations, specify precisely the transformation and its conditions of applicability, and prove its correctness; we accomplish this by casting it in terms of abstract data types, specifically by using the Alphard form mechanism. The costs of the structures derived by this transformation are only slightly greater than the costs of the original structures, and the correctness of the transformation definition, together with the correctness of the original structure, assures the correctness of the derived structure. The transformation we describe has already been used to develop a number of new algorithms, and it represents a new level of generality in software engineering tools.
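The transformation family discussed by Bentley and Saxe [1] turns a static structure for a decomposable searching problem into a dynamic one. Here is a minimal sketch of the binary-counter version; `build` and `query` stand in for the abstract data type's operations, and per-level answers are merged by set union:

```python
class Dynamized:
    """Bentley-Saxe static-to-dynamic transformation (sketch).

    Keeps static structures holding 1, 2, 4, ... items; an insertion
    merges levels like a binary-counter carry; a query combines the
    per-level answers, which is sound for decomposable problems."""
    def __init__(self, build, query):
        self.build, self.query = build, query
        self.levels = []                  # each entry: (structure, items) or None

    def insert(self, item):
        carry = [item]
        for i, level in enumerate(self.levels):
            if level is None:
                self.levels[i] = (self.build(carry), carry)
                return
            carry = carry + level[1]      # merge and carry upward
            self.levels[i] = None
        self.levels.append((self.build(carry), carry))

    def search(self, key):
        found = set()
        for level in self.levels:
            if level is not None:
                found |= self.query(level[0], key)
        return found

# Membership search with sorted lists as the "static" structure:
d = Dynamized(build=sorted,
              query=lambda s, k: {k} if k in s else set())  # linear scan for brevity
for x in [5, 3, 9, 1]:
    d.insert(x)
print(d.search(9))    # {9}
```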
