Similar Documents
 20 similar documents found (search time: 125 ms)
1.
We give a new characterization of elementary and deterministic polynomial time computation in linear logic through the proofs-as-programs correspondence. Girard’s seminal results, concerning elementary and light linear logic, achieve this characterization by enforcing a stratification principle on proofs, using the notion of depth in proof nets. Here, we propose a more general form of stratification, based on inducing levels in proof nets by means of indices, which allows us to extend Girard’s systems while keeping the same complexity properties. In particular, it turns out that Girard’s systems can be recovered by forcing depth and level to coincide. A consequence of the higher flexibility of levels with respect to depth is the absence of boxes for handling the paragraph modality. We use this fact to propose a variant of our polytime system in which the paragraph modality is only allowed on atoms, and which may thus serve as a basis for developing lambda-calculus type assignment systems with more efficient typing algorithms than existing ones.

2.
We study a probabilistic version of coherence spaces and show that these objects provide a model of linear logic. We build a model of the pure lambda-calculus in this setting and show how to interpret a probabilistic version of the functional language PCF. We give a probabilistic interpretation of the semantics of probabilistic PCF closed terms of ground type. Lastly, we suggest a generalization of this approach, using Banach spaces.
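The probabilistic interpretation of closed ground-type terms assigns to each term a distribution over numerals. The toy evaluator below illustrates that idea for a minimal probabilistic-PCF-like language; it is our own sketch, not the paper's coherence-space model, and the constructors `num`, `coin`, `ifz`, and `succ` are hypothetical names:

```python
# Toy interpreter: a closed ground-type term denotes a distribution over
# natural numbers. Illustration only, not the paper's semantics.

def dist(term):
    tag = term[0]
    if tag == "num":                  # numeral: point distribution
        return {term[1]: 1.0}
    if tag == "coin":                 # coin(p): 0 with probability p, else 1
        p = term[1]
        return {0: p, 1: 1.0 - p}
    if tag == "succ":                 # successor shifts the distribution
        return {v + 1: p for v, p in dist(term[1]).items()}
    if tag == "ifz":                  # ifz(c, t, e): branch on c being zero
        out = {}
        for v, pv in dist(term[1]).items():
            branch = term[2] if v == 0 else term[3]
            for w, pw in dist(branch).items():
                out[w] = out.get(w, 0.0) + pv * pw
        return out

# A fair coin choosing between the numerals 7 and 9:
term = ("ifz", ("coin", 0.5), ("num", 7), ("num", 9))
```

Running `dist(term)` yields `{7: 0.5, 9: 0.5}`, the subdistribution the semantics would assign.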

3.
We present the Lambda Context Calculus. This simple lambda-calculus features variables arranged in a hierarchy of strengths such that substitution of a strong variable does not avoid capture with respect to abstraction by a weaker variable. This allows the calculus to express both capture-avoiding and capturing substitution (instantiation). The reduction rules extend the ‘vanilla’ lambda-calculus in a simple and modular way and preserve the look and feel of a standard lambda-calculus with explicit substitutions. Good properties of the lambda-calculus are preserved. The LamCC is confluent, and there is a natural injection of the untyped lambda-calculus into the LamCC that preserves strong normalisation. We discuss the calculus and its design with full proofs. In the presence of the hierarchy of variables, functional binding splits into a functional abstraction λ (lambda) and a name-binder и (new). We investigate how the components of this calculus interact with each other and with the reduction rules, with examples. In two more extended case studies we demonstrate how global state can be expressed, and how contexts and contextual equivalence can be naturally internalised using function application.
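The strong/weak capture behaviour can be imitated in a toy model. The sketch below is our own illustration, not the LamCC itself: variables carry a hypothetical integer `level` (strength), and substituting a stronger variable proceeds under weaker binders without renaming, so capture occurs; at equal or lower strength the binder is renamed in the usual capture-avoiding way.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str
    level: int          # hypothetical "strength" of the variable

@dataclass(frozen=True)
class Lam:
    param: Var
    body: object

@dataclass(frozen=True)
class App:
    fun: object
    arg: object

_counter = [0]

def fresh(v):
    _counter[0] += 1
    return Var(f"{v.name}_{_counter[0]}", v.level)

def subst(t, x, s):
    """Substitute s for x in t."""
    if isinstance(t, Var):
        return s if t == x else t
    if isinstance(t, App):
        return App(subst(t.fun, x, s), subst(t.arg, x, s))
    p, b = t.param, t.body
    if p == x:
        return t
    if x.level > p.level:
        # strong variable vs weaker binder: capture is *not* avoided
        return Lam(p, subst(b, x, s))
    # otherwise: rename the binder first, the usual capture-avoiding discipline
    p2 = fresh(p)
    return Lam(p2, subst(subst(b, p, p2), x, s))
```

Substituting the strong variable `X` by `a` in `λa. X` yields `λa. a` (capture), whereas substituting a same-level variable renames the binder before going under it.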

4.
This paper describes a formalisation of the lambda-calculus in a HOL-based theorem prover using nominal techniques. Central to the formalisation is an inductive set that is bijective with the alpha-equated lambda-terms. Unlike with de Bruijn indices, however, this inductive set includes names, and reasoning about it is very similar to informal reasoning with “pencil and paper”. To show this we provide a structural induction principle that requires proving the lambda-case for fresh binders only. Furthermore, we adapt work by Pitts providing a recursion combinator for the inductive set. The main technical novelty of this work is that it is compatible with the axiom of choice (unlike earlier nominal logic work by Pitts et al.); thus we were able to implement all results in Isabelle/HOL and use them to formalise the standard proofs for Church-Rosser, strong normalisation of beta-reduction, the correctness of the type-inference algorithm W, typical proofs from SOS, and much more. This paper is a revised and much extended version of Urban and Berghofer [32] and Urban and Tasson [36].
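The nominal style of reasoning the paper mechanises can be glimpsed in the usual swapping-based definition of alpha-equivalence. The following is our own informal Python rendering, not the Isabelle/HOL development; terms are tuples `("var", n)`, `("app", f, a)`, `("lam", n, b)` with string names:

```python
def free_vars(t):
    tag = t[0]
    if tag == "var":
        return {t[1]}
    if tag == "app":
        return free_vars(t[1]) | free_vars(t[2])
    return free_vars(t[2]) - {t[1]}          # lam

def swap(a, b, t):
    """Swap the names a and b everywhere in t (nominal transposition)."""
    tag = t[0]
    if tag == "var":
        n = t[1]
        return ("var", b if n == a else a if n == b else n)
    if tag == "app":
        return ("app", swap(a, b, t[1]), swap(a, b, t[2]))
    n = t[1]
    return ("lam", b if n == a else a if n == b else n, swap(a, b, t[2]))

def aeq(s, t):
    """Alpha-equivalence: binders agree, or the first binder is fresh for
    the second body and the bodies agree after a name swap."""
    if s[0] != t[0]:
        return False
    if s[0] == "var":
        return s[1] == t[1]
    if s[0] == "app":
        return aeq(s[1], t[1]) and aeq(s[2], t[2])
    a, b = s[1], t[1]
    if a == b:
        return aeq(s[2], t[2])
    return a not in free_vars(t[2]) and aeq(s[2], swap(a, b, t[2]))
```

With this definition `λx.x` and `λy.y` are identified while `λx.y` and `λy.y` are not, mirroring the "pencil and paper" reasoning the inductive set supports.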

5.
We give a semantic account of the execution time (i.e. the number of cut elimination steps leading to the normal form) of an untyped MELL net. We first prove that: (1) a net is head-normalizable (i.e. normalizable at depth 0) if and only if its interpretation in the multiset based relational semantics is not empty and (2) a net is normalizable if and only if its exhaustive interpretation (a suitable restriction of its interpretation) is not empty. We then give a semantic measure of execution time: we prove that we can compute the number of cut elimination steps leading to a cut free normal form of the net obtained by connecting two cut free nets by means of a cut-link, from the interpretations of the two cut free nets. These results are inspired by similar ones obtained by the first author for the untyped lambda-calculus.

6.
The motivating role of linear logic is as a “logic behind logic”. We propose a sibling role for it as a logic of transformational mathematics via the self-dual category of Chu spaces, a generalization of topological spaces. These create a bridge between linear logic and mathematics by soundly and fully completely interpreting linear logic while fully and concretely embedding a comprehensive range of concrete categories of mathematics. Our main goal is to treat each end of this bridge in expository detail. In addition, we introduce the dialectic lambda-calculus, and show that dinaturality semantics is not fully complete for the Chu interpretation of linear logic.

7.
8.
The Curry-Howard correspondence connects derivations in natural deduction with the lambda-calculus. Predicates are types, derivations are terms. This supports reasoning from assumptions to conclusions, but we may want to reason backwards, from the desired conclusion towards the assumptions. At intermediate stages we may have a partial derivation, with holes. This is natural in informal practice but it can be difficult to formalise. The informal act of filling holes in a partial derivation suggests a capturing substitution, since holes may occur in the scope of quantifier introduction rules. As other authors have observed, this is not immediately supported by the lambda-calculus. Also, universal quantification requires a ‘fresh name’, and it is not immediately obvious what formal meaning to assign to this notion if derivations are incomplete. Further issues arise with proof-normalisation; this corresponds with lambda-calculus reduction, which can require alpha-conversion to avoid capture when beta-reducing, and it is not immediately clear how to alpha-convert a name in an incomplete derivation. We apply a one-and-a-half level technique based on nominal terms to construct a Curry-Howard correspondence for first-order logic. This features two levels of variable, but with no lambda-abstraction at the second level. Predicates are types, derivations are terms, proof-normalisation is reduction, and the two levels of variable are, respectively, the assumptions and the holes of an incomplete derivation. We give notions of proof-term, typing, alpha-conversion and beta-reduction for our syntax.
We prove confluence, and we exhibit several admissible rules, including a proof that instantiation of level two variables is type-safe (this corresponds with the act of filling holes in an incomplete derivation, and can be viewed as a form of Cut-rule), and we explore the connection with traditional Curry-Howard in the case that the derivation is in fact complete. Our techniques are not specifically tailored to first-order logic, and the same ideas should be applicable, without any essential new difficulties, to similar logical systems.

9.
We introduce a class of Petri nets, simple logic Petri nets (SLPN), that are based on logical expressions. We show how this type of net can be efficiently mapped into logic programs with negation: the corresponding answer sets describe interleaved executions of the underlying nets (Theorem 1). The absence of an answer set indicates a deadlock situation. We also show how to correctly model and specify AgentSpeak agents and multi-agent systems with SLPNs (Theorem 2). Both theorems allow us to solve the task of model checking AgentSpeak multi-agent systems by computing answer sets of the obtained logic program with any ASP system.
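The interleaved executions that the answer sets describe can be pictured with a small enumerator over a place/transition net. This is a plain Python illustration of the intended semantics, not the SLPN-to-ASP translation; the example net, markings, and function names are our own:

```python
def enabled(marking, transition):
    pre, _post = transition
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, transition):
    pre, post = transition
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

def runs(marking, transitions, depth):
    """All interleaved firing sequences of length <= depth. A sequence
    that stops early because nothing is enabled ends in a deadlock."""
    names = [t for t, tr in transitions.items() if enabled(marking, tr)]
    if not names or depth == 0:
        return [[]]
    out = []
    for t in names:
        for rest in runs(fire(marking, transitions[t]), transitions, depth - 1):
            out.append([t] + rest)
    return out

# Tiny net: t1 moves a token from p to q, t2 consumes it from q.
net = {"t1": ({"p": 1}, {"q": 1}), "t2": ({"q": 1}, {})}
```

From the marking `{"p": 1}` the only interleaving is `["t1", "t2"]`, after which nothing is enabled; in the ASP encoding that terminal situation would surface as a deadlock.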

10.
This paper presents a framework for giving a compositional theory of Petri nets using category theory. An integral part of our approach is the use of linear logic in specifying and reasoning about Petri nets. We construct categories of nets based on V. C. V. de Paiva's dialectica category models of linear logic [in “Proc. Category Theory and Computer Science, Manchester” (D. H. Pitt, D. E. Rydeheard, P. Dybjer, A. M. Pitts, and A. Poigné, Eds.), Lecture Notes in Computer Science, Vol. 389, Springer-Verlag, Berlin/New York, 1989] and exploit the structure of de Paiva's models to give constructions on nets. We compare our categories of nets with others in the literature, and show how some of the most widely-studied categories can be expressed within our framework. Taking a category of elementary nets as an example, we show how this approach yields both existing and novel constructions on nets and discuss their computational interpretation.

11.
We present a particularly simple lazy lambda-calculus machine, which was introduced twenty-five years ago. It has since been used and implemented by several authors, but remained unpublished. We also build an extension, with a control instruction and continuations. This machine was conceived in order to execute programs obtained from mathematical proofs, by means of the Curry-Howard (also known as “proof-program”) correspondence. The control instruction corresponds to the axiom of excluded middle.
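The machine described is commonly known as the Krivine machine. A minimal sketch in our own rendering, using de Bruijn indices and omitting the control instruction:

```python
# Krivine-style lazy evaluation to weak head normal form.
# Terms: ("var", n) with de Bruijn index n, ("lam", body), ("app", f, a).
# The environment maps indices to closures (term, env); arguments are
# pushed unevaluated, which is what makes the machine lazy.

def krivine(term):
    env, stack = [], []
    while True:
        tag = term[0]
        if tag == "app":
            stack.append((term[2], env))     # push the argument as a closure
            term = term[1]
        elif tag == "lam":
            if not stack:                    # weak head normal form reached
                return term, env
            env = [stack.pop()] + env        # bind the argument, still unevaluated
            term = term[1]
        else:                                # variable: enter its closure
            term, env = env[term[1]]
```

Evaluating `(λx.x)(λy.y)` returns the closure `(("lam", ("var", 0)), [])`; the simplicity of the three-case loop is the point the abstract emphasises.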

12.
Agent systems based on the Belief, Desire and Intention model of Rao and Georgeff have been used for a number of successful applications. However, it is often difficult to learn how to apply such systems, due to the complexity of both the semantics of the system and the computational model. In addition, there is a gap between the semantics and the concepts that are presented to the programmer. In this paper we address these issues by re-casting the foundations of such systems into a logic programming framework. In particular we show how the integration of backward- and forward-chaining techniques for linear logic provides a natural starting point for this investigation. We discuss how the integrated system provides for the interaction between the proactive and reactive parts of the system, and we discuss several aspects of this interaction. In particular, one perhaps surprising outcome is that goals and plans may be thought of as declarative and procedural aspects of the same concept. We also discuss the language design issues for such a system, and in particular how the choice of rules to evaluate in a forward-chaining manner is crucial to the behaviour of the system.

13.
The purpose of this paper is to demonstrate how Lafont's interaction combinators, a system of three symbols and six interaction rules, can be used to encode linear logic. Specifically, we give a translation of the multiplicative, exponential, and additive fragments of linear logic together with a strategy for cut-elimination which can be faithfully simulated. Finally, we show briefly how this encoding can be used for evaluating λ-terms. In addition to offering a very simple, perhaps the simplest, system of rewriting for linear logic and the λ-calculus, the interaction net implementation that we present has been shown by experimental testing to offer a good level of sharing in terms of the number of cut-elimination steps (resp. β-reduction steps). In particular it performs better than all extant finite systems of interaction nets.

14.
Interaction nets are a programming language proposed by Lafont at the POPL conference in 1990. Starting from the relationship between proofs and programs, we use linear logic as an integrating logic to study the theoretical properties of interaction nets, and obtain the following results: the partition of the auxiliary ports of a node in a net can be represented as a tensor product of the corresponding types; computation in a net is equivalent to the elimination of principal cuts in the linear sequent calculus; and for any interaction net, if there is a corresponding linear sequent calculus derivation, then the net is simple.

15.
There is a correspondence between classical logic and programming language calculi with first-class continuations. With the addition of control delimiters, the continuations become composable and the calculi become more expressive. We present a fine-grained analysis of control delimiters and show that their addition corresponds to the addition of a single dynamically-scoped variable modelling the special top-level continuation. From a type perspective, the dynamically-scoped variable requires effect annotations. In the presence of control, the dynamically-scoped variable can be interpreted in a purely functional way by applying a store-passing style. At the type level, the effect annotations are mapped within standard classical logic extended with the dual of implication, namely subtraction. A continuation-passing-style transformation of the lambda-calculus with control and subtraction is defined. Combining the translations provides a decomposition of standard CPS transformations for delimited continuations. Incidentally, we also give a direct normalisation proof of the simply-typed lambda-calculus with control and subtraction.

16.
The intersection type discipline allows one to define a wide variety of models for the type-free lambda-calculus, but the Curry–Howard isomorphism breaks down for this kind of type system. In this paper we show that the correspondence between types and suitable logical formulas can still be recovered by appealing to the fact that there is a strict connection between the semantics for the lambda-calculus induced by intersection types and a Kripke-style semantics for modal and relevant logics. Indeed, we present a modal logic suggested by the analysis of the sub-typing relation for intersection types, and we show that the deduction relation for this modal system is a conservative extension of the sub-typing relation. Then, we define a Kripke-style semantics for the formulas of this system, present suitable sequent calculi, prove a completeness theorem, and give a syntactic proof of the cut elimination property. Finally, we define a decision procedure for theoremhood and show that it yields the finite model property and cut-redundancy.

17.
This article is essentially an extended review with historical comments. It looks at an algorithm published in 1943 by M. H. A. Newman, which decides whether a lambda-calculus term is typable without actually computing its principal type. Newman's algorithm seems to have been completely neglected by the type-theorists who invented their own rather different typability algorithms over 15 years later.
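For contrast with Newman's 1943 approach, here is a sketch in the later style the review alludes to: a unification-based check that decides typability of a de Bruijn-indexed lambda-term by computing a principal simple type as it goes. This is our own illustration, not Newman's algorithm, whose point was precisely to avoid constructing the type:

```python
from itertools import count

# Simple types: ("tv", i) is a type variable, ("arr", a, b) a function type.

def walk(t, s):
    while t[0] == "tv" and t[1] in s:
        t = s[t[1]]
    return t

def occurs(v, t, s):
    t = walk(t, s)
    if t == ("tv", v):
        return True
    return t[0] == "arr" and (occurs(v, t[1], s) or occurs(v, t[2], s))

def unify(a, b, s):
    """Extend substitution s to unify types a and b; None on failure."""
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if a[0] == "tv":
        return None if occurs(a[1], b, s) else {**s, a[1]: b}
    if b[0] == "tv":
        return unify(b, a, s)
    s = unify(a[1], b[1], s)
    return None if s is None else unify(a[2], b[2], s)

def infer(term, env, s, fresh):
    tag = term[0]
    if tag == "var":                          # de Bruijn index into env
        return env[term[1]], s
    if tag == "lam":
        a = ("tv", next(fresh))
        r, s = infer(term[1], [a] + env, s, fresh)
        return (None, None) if s is None else (("arr", a, r), s)
    f, s = infer(term[1], env, s, fresh)      # application
    if s is None:
        return None, None
    x, s = infer(term[2], env, s, fresh)
    if s is None:
        return None, None
    r = ("tv", next(fresh))
    return r, unify(f, ("arr", x, r), s)

def typable(term):
    _ty, s = infer(term, [], {}, count())
    return s is not None
```

The identity `λx.x` is typable while the self-application `λx. x x` fails the occurs check, so `typable` rejects it.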

18.
林闯, 刘婷, 曲扬. 《计算机学报》 (Chinese Journal of Computers), 2001, 24(12): 1299-1309
To address the shortcomings of point-interval temporal logic, this paper proposes a new interval temporal logic, extended interval temporal logic, which is better able to describe events that occur over uncertain time intervals. The introduction of a time Petri net model representation makes extended interval temporal logic more intuitive and easier to analyse, and provides a useful tool for linear reasoning. Several firing inference rules between transitions are also proposed. Using these rules, Petri net models with complex temporal relations can be simplified, and the temporal logic relations between the transitions can be obtained quantitatively in linear time, making this an effective method.

19.
A Linear-Logic-Based Analysis Method for Petri Nets
1 Introduction. Petri nets are a method of representing system models with net-like graphs. They can precisely describe the dependence (ordering) and independence (concurrency) relations between the events (transitions) of a system from the perspectives of organisational structure, control, and management. Petri net theory provides powerful analysis methods, such as invariant analysis and system performance analysis (e.g. liveness), to prove the correctness of a system. In recent years, Petri nets have also been used to represent knowledge reasoning, for example in diagnosis and monitoring, which requires developing a theory of reasoning about Petri net behaviour. Some researchers have combined classical logic with Petri nets, using Petri nets to represent production rule systems, with the structure of the net representing the production-rule-based …

20.
Inspired by the Multiplicative Exponential fragment of Linear Logic, we define a framework called the prismoid of resources, where each vertex is a language which refines the λ-calculus by choosing, for each of the contraction, weakening, and substitution operations, whether it is explicit or implicit (meta-level). For all the calculi in the prismoid we show simulation of β-reduction, confluence, preservation of β-strong normalisation, and strong normalisation for typed terms. Full composition also holds for all the calculi of the prismoid handling explicit substitutions. The whole development of the prismoid is done by making the set of resources a parameter of the formalism, so that all the properties for each vertex are obtained as a particular case of the general abstract proofs.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号