Similar Documents
20 similar documents retrieved (search time: 125 ms)
1.
We present a case study on the formal development of a non-trivial (meta)theory in the Theory of Contexts using the Coq proof assistant. The methodology underlying the Theory of Contexts for reasoning on systems presented in HOAS is based on an axiomatic syntactic standpoint. We feel that one of the main advantages of this approach is that it requires very low logical overhead. The object system we focus on is the lazy, call-by-name λ-calculus (λcbn), both untyped and simply typed. We will see that the formal, fully detailed development of the theory of λcbn in the Theory of Contexts introduces a small, sustainable overhead with respect to proofs on paper. Moreover, this allows for comparison with similar case studies developed in other approaches to metatheoretical reasoning in higher-order abstract syntax.
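For reference, here is a standard big-step presentation of lazy, call-by-name evaluation to weak head normal form, a textbook formulation of λcbn assumed for illustration rather than taken from the paper's Coq development:

```latex
% Values are abstractions; application substitutes the unevaluated argument.
\frac{}{\lambda x.\,M \Downarrow \lambda x.\,M}
\qquad
\frac{M \Downarrow \lambda x.\,P \qquad P[x := N] \Downarrow V}{M\,N \Downarrow V}
```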

2.
In this article, we describe a set of procedures and strategies for searching for proofs in logical systems based on the inference rule condensed detachment. The procedures and strategies rely on the derivation of proof sketches – sequences of formulas that are used as hints to guide the search for sound proofs. In the simplest case, a proof sketch consists of a subproof – key lemmas to prove, for example – and the proof is completed by filling in the missing steps. In the more general case, a proof sketch consists of a sequence of formulas sufficient to find a proof, but it may include formulas that are not provable in the current theory. We find that even in this more general case, proof sketches can provide valuable guidance in finding sound proofs. Proof sketches have been used successfully for numerous problems coming from a variety of problem areas. We have, for example, used proof sketches to find several new two-axiom systems for Boolean algebra using the Sheffer stroke.
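Since condensed detachment drives everything in this article, a toy implementation may help fix the rule. The Python sketch below is our own minimal rendition, not the authors' code: given a major premise i(p, q) and a minor premise p′, it renames the major's variables apart, computes a most general unifier of p and p′, and returns the corresponding instance of q.

```python
# Terms: variables are ('var', name); an implication i(p, q) is ('i', p, q).

def walk(t, s):
    """Follow bindings for a variable in substitution s."""
    while t[0] == 'var' and t[1] in s:
        t = s[t[1]]
    return t

def unify(a, b, s):
    """Return an extended substitution unifying a and b, or None."""
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if a[0] == 'var':
        return {**s, a[1]: b}          # occurs check omitted for brevity
    if b[0] == 'var':
        return {**s, b[1]: a}
    if a[0] == 'i' and b[0] == 'i':
        s = unify(a[1], b[1], s)
        return None if s is None else unify(a[2], b[2], s)
    return None

def apply(t, s):
    """Apply substitution s to term t."""
    t = walk(t, s)
    return ('i', apply(t[1], s), apply(t[2], s)) if t[0] == 'i' else t

def rename(t, suffix):
    """Rename every variable by appending a suffix (variables apart)."""
    if t[0] == 'var':
        return ('var', t[1] + suffix)
    return ('i', rename(t[1], suffix), rename(t[2], suffix))

def condensed_detachment(major, minor):
    """From major i(p, q) and minor p', infer sigma(q) with sigma = mgu(p, p')."""
    major = rename(major, "'")
    if major[0] != 'i':
        return None
    s = unify(major[1], minor, {})
    return None if s is None else apply(major[2], s)

# Example: from i(x, i(y, x)) and i(a, b), derive i(y', i(a, b)).
x, y = ('var', 'x'), ('var', 'y')
a, b = ('var', 'a'), ('var', 'b')
print(condensed_detachment(('i', x, ('i', y, x)), ('i', a, b)))
```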

3.
We prove that a regular language defined by a boolean combination of generalized Σ1-sentences built using modular counting quantifiers can be defined by a boolean combination of Σ1-sentences in which only regular numerical predicates appear. The same statement, with “Σ1” replaced by “first-order,” is equivalent to the conjecture that the nonuniform circuit complexity class ACC is strictly contained in NC1. The argument introduces some new techniques, based on a combination of semigroup theory and Ramsey theory, which may shed some light on the general case.
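For intuition, an illustrative generalized Σ1-sentence with a single modular counting quantifier (our example, not the paper's): over the alphabet {a, b}, the words containing an even number of a's are defined by

```latex
\exists^{\,\equiv 0 \ (\mathrm{mod}\ 2)} x .\; Q_a(x)
```

read as: the number of positions x carrying the letter a is congruent to 0 modulo 2.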

4.
In this paper we consider discrete-time piecewise affine systems with boolean inputs, outputs and states and show that they can be represented in a logic canonical form where the logic variables influence the switching between different submodels but not the continuous-valued dynamics. We exploit this representation for studying Lagrange stability and developing performance analysis procedures based on linear matrix inequalities. Moreover, by using arguments from dissipativity theory for non-linear systems, we generalize our approach to solve the H∞ analysis problem.
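As a hedged illustration of the LMI machinery invoked above, the Python sketch below uses cvxpy to search for a common quadratic Lyapunov function certifying stability of two submodels; the matrices, the tool, and the common-Lyapunov formulation (a standard sufficient condition, not necessarily the paper's exact procedure) are all assumptions.

```python
# Assumes cvxpy with an SDP-capable solver (the bundled SCS suffices).
import cvxpy as cp
import numpy as np

# Two stable submodels x(k+1) = A_i x(k); logic variables only select i.
A1 = np.array([[0.5, 0.2], [0.0, 0.4]])
A2 = np.array([[0.3, -0.1], [0.2, 0.5]])

P = cp.Variable((2, 2), symmetric=True)
eps = 1e-3
constraints = [P >> eps * np.eye(2)]
for A in (A1, A2):
    # A_i' P A_i - P << 0: V(x) = x' P x decreases along every switching path.
    constraints.append(A.T @ P @ A - P << -eps * np.eye(2))

cp.Problem(cp.Minimize(0), constraints).solve()
print("common Lyapunov matrix P =\n", P.value)
```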

5.
This paper is concerned with a proof-theoretic observation about two kinds of proof systems for regular cyclic objects. It is presented for the case of two formal systems that are complete with respect to the notion of “recursive type equality” on a restricted class of recursive types in μ-term notation. Here we show the existence of an immediate duality, with a geometrical visualization, between proofs in a variant of the coinductive axiom system due to Brandt and Henglein and “consistency-unfoldings” in a variant of a “syntactic-matching” proof system for testing equations between recursive types due to Ariola and Klop. Finally, we sketch an analogous duality between a similar pair of proof systems for bisimulation equivalence on equational specifications of cyclic term graphs.
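To make the μ-term notation concrete, here is a minimal Python sketch of one-step unfolding of recursive types, μt.T ↦ T[t := μt.T]; the tuple representation is our own assumption and is unrelated to either of the paper's proof systems.

```python
# Types: ('var', name) | ('arrow', dom, cod) | ('mu', name, body)

def subst(ty, name, repl):
    """Substitution ty[name := repl]; binder names assumed distinct."""
    tag = ty[0]
    if tag == 'var':
        return repl if ty[1] == name else ty
    if tag == 'arrow':
        return ('arrow', subst(ty[1], name, repl), subst(ty[2], name, repl))
    if tag == 'mu':
        if ty[1] == name:          # shadowing binder: stop here
            return ty
        return ('mu', ty[1], subst(ty[2], name, repl))
    raise ValueError(tag)

def unfold(ty):
    """One-step unfolding: mu t.T  |->  T[t := mu t.T]."""
    assert ty[0] == 'mu'
    _, t, body = ty
    return subst(body, t, ty)

# Example: mu t. t -> t unfolds to (mu t. t -> t) -> (mu t. t -> t).
rec = ('mu', 't', ('arrow', ('var', 't'), ('var', 't')))
print(unfold(rec))
```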

6.
This article provides additional evidence of the value of using an automated reasoning program as a research assistant. Featured is the use of Bill McCune's program OTTER to find proofs of theorems taken from the study of Moufang loops, but not just any proofs. Specifically, the proofs satisfy the property of purity. In particular, when given, say, four equivalent identities (which is the case in this article), one is asked to prove the second identity from the first, the third from the second, the fourth from the third, and the first from the fourth. If the proof that 1 implies 2 does not rely on 3 or 4, then by definition the proof is pure with respect to 3 and 4, or simply the proof is pure. If for the four identities one finds four pure proofs showing that 1 implies 2, 2 implies 3, 3 implies 4, and 4 implies 1, then by definition one has found a circle of pure proofs. By finding the needed twelve pure proofs, this article shows that there does exist a circle of pure proofs for the four equivalent identities for Moufang loops and for all orderings of the identities; however, for much of this article, the emphasis is on the first three identities. In addition — in part to promote the use of automated reasoning programs and to answer questions concerning the choice of options — featured here is the methodology that was employed and a discussion of some of the obstacles, some of which are subtle. The approach relies on paramodulation (which generalizes equality substitution), on demodulation, and — so crucial for attacking deep questions and hard problems — on various strategies, most important of which are the hot list strategy, the set of support strategy, and McCune's ratio strategy. To permit verification of the results presented here, extension of them, and application of the methodology to other unrelated fields, a sample input file and four proofs (relevant to a circle of pure proofs for the four identities) are included. Research topics and challenges are offered at the close of this article. This work was supported by the Mathematical, Information, and Computational Sciences Division subprogram of the Office of Computational and Technology Research, U.S. Department of Energy, under Contract W-31-109-Eng-38.
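Since the four equivalent Moufang identities are the article's central objects, a small sketch may help fix them concretely. The Python check below uses one common formulation of the identities (our assumption; the article's exact forms may differ) and verifies them on a finite multiplication table.

```python
def is_moufang(elems, mul):
    """Check the four Moufang identities for all x, y, z in elems."""
    for x in elems:
        for y in elems:
            for z in elems:
                ok = (
                    mul(z, mul(x, mul(z, y))) == mul(mul(mul(z, x), z), y) and
                    mul(x, mul(z, mul(y, z))) == mul(mul(mul(x, z), y), z) and
                    mul(mul(z, x), mul(y, z)) == mul(mul(z, mul(x, y)), z) and
                    mul(mul(z, x), mul(y, z)) == mul(z, mul(mul(x, y), z))
                )
                if not ok:
                    return False
    return True

# Every group is a Moufang loop (associativity makes all four identities
# trivial); the cyclic group Z/5 under addition serves as a sanity check.
print(is_moufang(range(5), lambda a, b: (a + b) % 5))
```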

7.
Abstractions are important in specifying and proving properties of complex systems. To prove that a given automaton implements an abstract specification automaton, one must first find the correct abstraction relation between the states of the automata, and then show that this relation is preserved by all corresponding action sequences of the two automata. This paper describes tool support based on the PVS theorem prover that can help users accomplish the second task, that is, proving a candidate abstraction relation correct. This tool support relies on a clean and uniform technique for defining abstraction properties relating automata that uses library theories for defining abstraction relations and templates for specifying automata and abstraction theorems. The paper then describes how the templates and theories allow development of generic, high-level PVS strategies that aid in the mechanization of abstraction proofs. These strategies first set up the standard subgoals for the abstraction proofs and then execute the standard initial proof steps for these subgoals, thus making the process of proving abstraction properties in PVS more automated. With suitable supplementary strategies to implement the “natural” proof steps needed to complete the proofs of any of the standard subgoals remaining to be proved, the abstraction proof strategies can form part of a set of mechanized proof steps that can be used interactively to translate high-level proof sketches into PVS proofs. Using timed I/O automata examples taken from the literature, this paper illustrates the use of the templates, theories, and strategies described to specify and prove two types of abstraction property: refinement and forward simulation.

8.
It is well-known that adding reflective reasoning can tremendously increase the power of a proof assistant. In order for this theoretical increase of power to become accessible to users in practice, the proof assistant needs to provide a great deal of infrastructure to support reflective reasoning. In this paper we explore the problem of creating a practical implementation of such a support layer. Our implementation takes a specification of a logical theory (which is identical to how it would be specified if we were simply going to reason within this logical theory, instead of reflecting it) and automatically generates the necessary definitions, lemmas, and proofs that are needed to enable the reflected meta-reasoning in the provided theory. One of the key features of our approach is that the structure of a logic is preserved when it is reflected. In particular, all variables, including meta-variables, are preserved in the reflected representation. This also allows the preservation of proof automation — there is a structure-preserving one-to-one map from proof steps in the original logic to proof steps in the reflected logic. To enable reasoning about terms with sequent context variables, we develop a principle for context induction, called teleportation. This work is fully implemented in the MetaPRL theorem prover.

9.
The intended meaning of intuitionistic logic is explained by the Brouwer-Heyting-Kolmogorov (BHK) provability semantics, which informally defines intuitionistic truth as provability and specifies the intuitionistic connectives via operations on proofs. The problem of finding an adequate formalization of the provability semantics and establishing the completeness of the intuitionistic logic Int was first raised by Gödel in 1933. This question turned out to be part of the more general problem of the intended realization for Gödel's modal logic of provability S4, which had been open since 1933. In this tutorial talk we present a provability realization of Int and S4 that solves both of these problems. We describe the logic of explicit provability (LP), with atoms “t is a proof of F”, and establish that every theorem of S4 admits a reading in LP as a statement about operations on proofs. Moreover, both S4 and Int are shown to be complete with respect to this realization. In addition, LP subsumes the λ-calculus, modal λ-calculus, and combinatory logic, and provides a uniform provability realization of modality and λ-terms.
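For orientation, these are the characteristic axioms of the Logic of Proofs LP over classical propositional logic, as standardly presented in the literature (reproduced here from general knowledge, not from the talk itself):

```latex
\begin{aligned}
& s{:}(F \to G) \to (t{:}F \to (s\cdot t){:}G) && \text{(application)}\\
& t{:}F \to\, !t{:}(t{:}F) && \text{(proof checker)}\\
& s{:}F \to (s+t){:}F, \qquad t{:}F \to (s+t){:}F && \text{(sum)}\\
& t{:}F \to F && \text{(reflexivity)}
\end{aligned}
```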

10.
Formal checking at Register-Transfer Level (RTL) is currently a fundamental step in the design of hardware circuits. Most tools for formal checking, however, work at the boolean level, which is not expressive enough to capture the abstract, high-level (e.g., structural, word-level) information of RTL designs. Tools for formal checking are thus confronted with problems that are “flattened” down to the boolean level, so that a predominant part of their computational effort is wasted performing useless boolean search on the bitwise encoding of integer data and arithmetical operations. In this paper we present a way of encoding RTL constructs into SMT formulas, that is, boolean combinations of boolean variables and quantifier-free constraints in Integer Linear Arithmetic. Such formulas can be handled directly by the MathSAT tool (and others), without flattening to the boolean level, thereby drastically reducing the computational effort. We propose a mixed boolean/ILP encoding, in which control variables are encoded as boolean variables and datapath variables as integer variables; control constructs are handled as boolean combinations of control variables and predicates over datapath variables, and datapath constructs are encoded, as much as possible, as linear arithmetical constraints over datapath variables.
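To make the encoding style concrete, the sketch below expresses a tiny RTL fragment in the mixed style described above, using the z3 Python API purely for illustration (the paper targets MathSAT; the design and property are our assumptions): the control bit stays boolean and the datapath stays at the word level as integers.

```python
from z3 import Bool, Int, And, Solver, If, sat

sel = Bool('sel')                              # control: selects the ALU op
a, b, out = Int('a'), Int('b'), Int('out')     # datapath: word-level integers

s = Solver()
s.add(out == If(sel, a + b, a - b))            # the RTL mux as one constraint
s.add(And(0 <= a, a <= 255, 0 <= b, b <= 255)) # 8-bit operand ranges

# Word-level property: when sel is set, out never exceeds 510.
# Search for a counterexample: sel asserted and the bound violated.
s.add(sel, out > 510)
print("property holds" if s.check() != sat else s.model())
```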

11.
The problem of proving that two programs, in any reasonable programming language, are equivalent is well-known to be undecidable. In a formal programming system, in which the rules for equivalence are finitely presented, the problem of provable equivalence is semi-decidable. Despite this improved situation, there is a significant lack of generally accepted automated techniques for systematically searching for a proof (or disproof) of program equivalence. Techniques for searching for proofs of equivalence often stumble on the formulation of induction and, of course, coinduction (when it is present), which are often formulated in such a manner as to require inspired guesses. There are, however, well-known program transformation techniques which do address these issues. Of particular interest to this paper are the deforestation techniques introduced by Phil Wadler and the fold/unfold program transformation techniques introduced by Burstall and Darlington. These techniques are shadows of an underlying cut-elimination procedure and, as such, should be more generally recognized as proof techniques. In this paper we show that these techniques apply to languages which have both inductive and coinductive datatypes. The relationship between these program transformation techniques and cut-elimination requires a transformation from initial and final “algebra” proof rules into “circular” proof rules as introduced by Santocanale (and used implicitly in the model checking community). This transformation is only possible in certain proof systems. Here we show that it can be applied to cartesian closed categories with datatypes: closedness is an essential requirement. The cut-elimination theorems and attendant program transformation techniques presented here rely heavily on this alternate presentation of induction and coinduction.

12.
It is well understood how to compute the average or centroid of a set of numeric values, as well as their variance. In this way we handle inconsistent measurements of the same property. We wish to solve the analogous problem on qualitative data: how to compute the “average” or consensus of a set of affirmations on a non-numeric fact, as reported for instance by different Web sites? What is the most likely truth among a set of inconsistent assertions about the same attribute? Given a set (a bag, in fact) of statements about a qualitative feature, this paper provides a method, based on the theory of confusion, to assess the most plausible value or “consensus” value. It is the value most likely to be true, given the information available. We also compute the inconsistency of the bag, which measures how far apart the testimonies in the bag are. All observers are equally credible, so differences arise from perception errors, due to the limited accuracy of the individual findings (the limited information extracted by the examination method from the observed reality). Our approach differs from classical logic, which considers a set of assertions to be either consistent (True, or 1) or inconsistent (False, or 0), and it does not use Fuzzy Logic.
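As an illustration of the intended computation, here is a minimal Python sketch that selects the value minimizing total “confusion” over a bag of testimonies and reports the bag's inconsistency. The trivial 0/1 confusion function is a stand-in assumption; the paper's measure from the theory of confusion would be plugged in instead.

```python
from collections import Counter

def conf(r, s):
    """Assumed confusion of answering r when the truth is s: 0 if equal, else 1."""
    return 0.0 if r == s else 1.0

def consensus(bag):
    """Return the value minimizing total confusion against the bag, plus the
    bag's inconsistency (the minimum average confusion; 0 when all agree)."""
    counts = Counter(bag)
    best = min(counts, key=lambda v: sum(conf(v, s) * n for s, n in counts.items()))
    inconsistency = sum(conf(best, s) * n for s, n in counts.items()) / len(bag)
    return best, inconsistency

print(consensus(["oak", "oak", "elm", "oak", "birch"]))  # -> ('oak', 0.4)
```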

13.
A challenging problem within machine learning is how to make good inferences from data sets in which pieces of information are missing. While it is valuable to have algorithms that perform well for specific domains, to gain a fundamental understanding of the problem, one needs a “theory” about how to learn with incomplete data. The important contribution of such a theory is not so much the specific algorithmic results, but rather that it provides good ways of thinking about the problem formally. In this paper we introduce the unspecified attribute value (UAV) learning model as a first step towards a theoretical framework for studying the problem of learning from incomplete data in the exact learning framework. In the UAV learning model, an example x is classified positive (resp., negative) if all possible assignments for the unspecified attributes result in a positive (resp., negative) classification. Otherwise the classification given to x is “?” (for unknown). Given an example x in which some attributes are unspecified, the oracle UAV-MQ responds with the classification of x. Given a hypothesis h, the oracle UAV-EQ returns an example x (that could have unspecified attributes) for which h(x) is incorrect. We show that any class of functions learnable in Angluin’s exact model using the MQ and EQ oracles is also learnable in the UAV model using the MQ and UAV-EQ oracles as long as the counterexamples provided by the UAV-EQ oracle have a logarithmic number of unspecified attributes. We also show that any class learnable in the exact model using the MQ and EQ oracles is also learnable in the UAV model using the UAV-MQ and UAV-EQ oracles as well as an oracle to evaluate a given boolean formula on an example with unspecified attributes. (For some hypothesis classes such as decision trees and unate formulas the evaluation can be done in polynomial time without an oracle.) We also study the learnability of a universal class of decision trees under the UAV model and of DNF formulas under a representation-dependent variation of the UAV model.
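The classification rule of the UAV model is easy to state operationally; below is a direct Python sketch (the example target function is our assumption): an example with unspecified attributes, written None, is labeled positive only if every completion is positive, negative only if every completion is negative, and “?” otherwise.

```python
from itertools import product

def uav_classify(x, f):
    """Classify x under the UAV rule with respect to target function f."""
    unknown = [i for i, v in enumerate(x) if v is None]
    labels = set()
    for bits in product([False, True], repeat=len(unknown)):
        y = list(x)
        for i, b in zip(unknown, bits):
            y[i] = b
        labels.add(f(y))
    return labels.pop() if len(labels) == 1 else '?'

# Assumed target: x0 AND (x1 OR x2).
f = lambda y: y[0] and (y[1] or y[2])
print(uav_classify([True, True, None], f))    # True: both completions positive
print(uav_classify([None, False, False], f))  # False: both completions negative
print(uav_classify([True, None, False], f))   # '?': the completions disagree
```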

14.
We present a meta-logic that contains a new quantifier (for encoding “generic judgments”) and inference rules for reasoning within fixed points of a given specification. We then specify the operational semantics and bisimulation relations for the finite π-calculus within this meta-logic. Since we restrict to the finite case, the ability of the meta-logic to reason within fixed points becomes a powerful and complete tool, since simple proof search can compute this one fixed point. The quantifier helps with the delicate issues surrounding the scope of variables within π-calculus expressions and their executions (proofs). We shall illustrate several merits of the logical specifications we write: they are natural and declarative; they contain no side conditions concerning names of variables while maintaining a completely formal treatment of such variables; differences between late and open bisimulation relations are easy to see declaratively; and proof search involving the application of inference rules, unification, and backtracking can provide complete proof systems both for one-step transitions and for bisimulation.

15.
A propositional proof system is automatizable if there is an algorithm that, given a tautology, produces a proof in time polynomial in the size of its smallest proof. This notion can be weakened if we allow the algorithm to produce a proof in a stronger system within the same time bound. This new notion is called weak automatizability. Among other characterizations, we prove that a system is weakly automatizable exactly when a weak form of the satisfiability problem is solvable in polynomial time. After studying the robustness of the definition, we prove the equivalence between: (i) Resolution is weakly automatizable, (ii) Res(k) is weakly automatizable, and (iii) Res(k) has feasible interpolation, when k>1. In order to prove this result, we show that Res(2) has polynomial-size proofs of the reflection principle of Resolution, which is a version of consistency. We also show that Res(k), for every k>1, proves its consistency in polynomial size, while Resolution does not. In fact, we show that Resolution proofs of its own consistency require almost exponential size. This gives a better lower bound for the monotone interpolation of Res(2) and a separation from Resolution as a byproduct. Our techniques also give us a way to obtain a large class of examples that have small Resolution refutations but require relatively large width. This answers a question of Alekhnovich and Razborov related to whether Resolution is automatizable in quasipolynomial time.

16.
Some computationally hard problems – e.g., deduction in logical knowledge bases – are such that part of an instance is known well before the rest of it, and remains the same for several subsequent instances of the problem. In these cases, it is useful to preprocess off-line this known part so as to simplify the remaining on-line problem. In this paper we investigate such a technique in the context of intractable, i.e., NP-hard, problems. Recent results in the literature show that not all NP-hard problems behave in the same way: for some of them preprocessing yields polynomial-time on-line simplified problems (we call them compilable), while for others compilability would imply consequences that are considered unlikely. Our primary goal is to provide a sound methodology that can be used to either prove or disprove that a problem is compilable. To this end, we define new models of computation, complexity classes, and reductions. We find complete problems for such classes, “completeness” meaning they are “the least likely to be compilable.” We also investigate preprocessing that does not yield polynomial-time on-line algorithms, but generically “decreases” complexity. This leads us to define “hierarchies of compilability,” which are the analog of the polynomial hierarchy. A detailed comparison of our framework to the idea of “parameterized tractability” shows the differences between the two approaches.

17.
Towards an algebraic theory of information integration
Information integration systems provide uniform interfaces to varieties of heterogeneous information sources. For query answering in such systems, the current generation of query answering algorithms in local-as-view (source-centric) information integration systems all produce what has been thought of as “the best obtainable” answer, given that the source-centric approach introduces incomplete information into the virtual global relations. However, this “best obtainable” answer does not include all information that can be extracted from the sources, because it does not allow partial information. Neither does the “best obtainable” answer allow for composition of queries, meaning that querying a result of a previous query will not be equivalent to the composition of the two queries. In this paper, we provide a foundation for information integration, based on the algebraic theory of incomplete information. Our framework allows us to define the semantics of partial facts and introduce the notion of the exact answer — that is, the answer that includes partial facts. We show that querying under the exact answer semantics is compositional. We also present two methods for actually computing the exact answer. The first method is tableau-based, and it is a generalization of the “inverse-rules” approach. The second, much more efficient method is a generalization of the rewriting approach, and it is based on partial containment mappings introduced in the paper.

18.
We consider the equivalence relation in the space of time-invariant linear dynamical systems of the form

ẋ = Ax + Bu,  y = Cx + Du

under the full group action of state feedback/output injection transformations. As in the case of the reduction of a matrix to its Jordan canonical form, the reduction of a quadruple (A,B,C,D) defining a system as above to its canonical reduced form is an unstable operation. Following V.I. Arnold’s techniques, the starting point for the study of local perturbations is to obtain a miniversal deformation of a differentiable family of quadruples. In this paper, a “real” miniversal deformation and some applications are shown.

19.
From the concepts of input chain and controllability chain introduced by Heymann, and in view of generalized eigenvector assignment, a new algebraic characterization of controllability subspaces (CSs) is developed. Compared with previous characterizations based on matrix pencil theory, it has the advantages of being stated in the current basis and of being derived without resorting to canonical forms. A practical algebraic algorithm is provided to compute the controllable structure of a given pair (A, B). This approach is then applied to characterize CSs contained in a given subspace and to span CSs via eigenvector assignment.
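As background for the rank computations involved, here is a small numpy sketch of the classical controllability-matrix test for a pair (A, B); this is textbook material, not the paper's new chain-based characterization.

```python
import numpy as np

def controllability_matrix(A, B):
    """Build [B, AB, ..., A^(n-1) B]; (A, B) is controllable iff it has rank n."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # example pair (our assumption)
B = np.array([[0.0], [1.0]])
C = controllability_matrix(A, B)
print(C, "rank =", np.linalg.matrix_rank(C))  # rank 2: (A, B) controllable
```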

20.
We study a finite-horizon optimal control problem for a class of linear systems with a randomly varying time delay. Systems of this type may arise in embedded control applications and in certain applications in economics. The delay value is treated as an unknown variable with known statistical properties, modelled by a Markov process with a finite number of states. The “probabilistic delay averaging” approach is employed to determine the optimal control in a form that is independent of the delay value.
