Higher-order representations of objects such as programs, proofs, formulas, and types have become important to many symbolic computation tasks. Systems that support such representations usually depend on the implementation of an intensional view of the terms of some variant of the typed lambda calculus. New notations have been proposed for the lambda calculus that provide an excellent basis for realizing such implementations. There are, however, several choices in the actual deployment of these notations, the practical consequences of which are not currently well understood. We attempt to develop such an understanding here by examining the impact on performance of different combinations of the features afforded by such notations. Among the facets examined are the treatment of bound variables, eagerness and laziness in substitution and reduction, the ability to merge different structure traversals into one, and the virtues of annotations on terms that indicate their dependence on variables bound by external abstractions. We complement qualitative assessments with experiments conducted by executing programs in a language that supports an intensional view of lambda terms while varying relevant aspects of the implementation of the language. Our study provides insights into the preferred approaches to representing and reducing lambda terms and also exposes characteristics of computations that have a somewhat unanticipated effect on performance.
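As a concrete reference point for the representation choices surveyed above, here is a minimal sketch of de Bruijn-indexed lambda terms with eager substitution. The class and function names are illustrative assumptions, not taken from the paper.

```python
# A minimal de Bruijn-indexed term representation with eager substitution,
# one point in the design space the abstract examines. Illustrative only.
from dataclasses import dataclass

class Term: pass

@dataclass
class Var(Term):
    index: int      # de Bruijn index: 0 refers to the nearest binder

@dataclass
class Lam(Term):
    body: Term

@dataclass
class App(Term):
    fn: Term
    arg: Term

def shift(t, d, cutoff=0):
    """Add d to every index >= cutoff (free relative to cutoff binders)."""
    if isinstance(t, Var):
        return Var(t.index + d) if t.index >= cutoff else t
    if isinstance(t, Lam):
        return Lam(shift(t.body, d, cutoff + 1))
    return App(shift(t.fn, d, cutoff), shift(t.arg, d, cutoff))

def subst(t, j, s):
    """Eagerly replace index j in t by s, adjusting s under binders."""
    if isinstance(t, Var):
        return s if t.index == j else t
    if isinstance(t, Lam):
        return Lam(subst(t.body, j + 1, shift(s, 1)))
    return App(subst(t.fn, j, s), subst(t.arg, j, s))

def beta(t):
    """One head beta-reduction step, if the term is a redex."""
    if isinstance(t, App) and isinstance(t.fn, Lam):
        return shift(subst(t.fn.body, 0, shift(t.arg, 1)), -1)
    return t

# (\x. x) y  -->  y, where y is the free variable with index 0
print(beta(App(Lam(Var(0)), Var(0))))  # Var(index=0)
```

A lazy variant would instead record the substitution in a suspension and propagate it on demand, which is exactly the kind of trade-off the study measures.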

The calculus c serves as a general framework for representing contexts. Essential features are control over variable capturing and the freedom to manipulate contexts before or after hole filling, by a mechanism of delayed substitution. The context calculus c is given in the form of an extension of the lambda calculus. Many notions of context can be represented within the framework; a particular variation can be obtained by the choice of a pretyping, which we illustrate by three examples.

A new complete characterization of β-strong normalization is given, both in the classical and in the lazy λ-calculus, through the notion of potential valuability inside two suitable parametric calculi.

The notion of confluence is studied in the context of bigraphs. Confluence will be important in modelling real-world systems, both natural (as in biology) and artificial (as in pervasive computing). The paper uses bigraphs in which names have multiple locality; this enables a formulation of the lambda calculus with explicit substitutions. The paper reports work in progress, seeking conditions on a bigraphical reactive system that are sufficient to ensure confluence; the conditions must deal with the way that bigraphical redexes can be intricately intertwined. The conditions should also be satisfied by the lambda calculus. After discussion of these issues, two conjectures are put forward.

This paper defines reduction on derivations in the strict intersection type assignment system of [2], by generalising cut-elimination, and shows a strong normalisation result for this reduction. Using this result, new proofs are given for the approximation theorem and the characterisation of normalisability using intersection types.

We study Newman’s typability algorithm (Newman, 1943) [14] for simple type theory. The algorithm dates from 1943, but went unnoticed until (Newman, 1943) [14] was recently rediscovered by Hindley (2008) [10]. The remarkable thing is that it decides typability without computing a type. We give a modern presentation of the algorithm (also a graphical one), prove its correctness, and show that it implicitly does compute the principal type. We also show how the typing algorithm can be extended to other type constructors. Finally, we show that Newman’s algorithm actually includes a unification algorithm.
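For contrast with Newman's type-free decision procedure, the conventional route that does compute the principal type can be sketched with first-order unification over arrow types. Everything below (the term encoding, the names) is an illustrative assumption, not Newman's presentation.

```python
# Principal-type inference for the simply typed lambda calculus via
# first-order unification over arrow types. Illustrative sketch only.
import itertools

fresh = itertools.count()

def walk(t, subst):
    """Follow substitution links from a type variable."""
    while t[0] == 'tv' and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    t = walk(t, subst)
    if t == v:
        return True
    return t[0] == 'arrow' and (occurs(v, t[1], subst) or occurs(v, t[2], subst))

def unify(t1, t2, subst):
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return
    if t1[0] == 'tv':
        if occurs(t1, t2, subst):            # reject infinite types
            raise TypeError('not typable (occurs check)')
        subst[t1] = t2
    elif t2[0] == 'tv':
        unify(t2, t1, subst)
    elif t1[0] == 'arrow' and t2[0] == 'arrow':
        unify(t1[1], t2[1], subst)
        unify(t1[2], t2[2], subst)
    else:
        raise TypeError('type constructor clash')

def infer(term, env, subst):
    """term: ('var', x) | ('lam', x, body) | ('app', f, a)."""
    if term[0] == 'var':
        return env[term[1]]
    if term[0] == 'lam':
        a = ('tv', next(fresh))
        return ('arrow', a, infer(term[2], {**env, term[1]: a}, subst))
    f = infer(term[1], env, subst)
    a = infer(term[2], env, subst)
    r = ('tv', next(fresh))
    unify(f, ('arrow', a, r), subst)
    return r

def resolve(t, subst):
    """Fully apply the substitution to read off the principal type."""
    t = walk(t, subst)
    if t[0] == 'arrow':
        return ('arrow', resolve(t[1], subst), resolve(t[2], subst))
    return t
```

The identity `\x.x` infers to `a -> a`, while self-application `\x. x x` fails the occurs check, mirroring the typability verdicts Newman's algorithm delivers without ever building these types.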

ABSTRACT and C-T-ABS are abstraction algorithms that translate λ-expressions into SKI expressions, and they form a theoretical basis for the implementation of functional languages. Starting from the basic theory of the λ-calculus and the SKI calculus, this paper derives both algorithms and proves them functionally equivalent. It shows that, for the same input, C-T-ABS generates better-optimized code, and it compares the length of the code each algorithm produces and the efficiency of its reduction.
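The paper's ABSTRACT and C-T-ABS algorithms are not specified in this summary; as a baseline for comparison, here is the textbook S/K/I bracket-abstraction translation together with a small normal-order SKI reducer, using an ad-hoc tuple encoding of applications.

```python
# Textbook bracket abstraction [x]e into S/K/I, plus a normal-order
# reducer. Applications are pairs (f, a); combinators and variables are
# strings. A baseline sketch, not the paper's ABSTRACT or C-T-ABS.
def free_in(x, e):
    if isinstance(e, tuple):
        return free_in(x, e[0]) or free_in(x, e[1])
    return e == x

def bracket(x, e):
    """The basic rules: [x]x = I; [x]e = K e if x not free; else S-rule."""
    if e == x:
        return 'I'
    if not free_in(x, e):
        return ('K', e)
    f, a = e
    return (('S', bracket(x, f)), bracket(x, a))

def step(e):
    """One leftmost-outermost SKI step; returns (term, changed)."""
    if isinstance(e, tuple):
        f, a = e
        if f == 'I':                                   # I a -> a
            return a, True
        if isinstance(f, tuple) and f[0] == 'K':       # K e a -> e
            return f[1], True
        if isinstance(f, tuple) and isinstance(f[0], tuple) and f[0][0] == 'S':
            return ((f[0][1], a), (f[1], a)), True     # S g h a -> g a (h a)
        f2, ch = step(f)
        if ch:
            return (f2, a), True
        a2, ch = step(a)
        if ch:
            return (f, a2), True
    return e, False

def normalize(e):
    changed = True
    while changed:
        e, changed = step(e)
    return e
```

The naive S-rule is what makes unoptimized translations long; optimizing translators (in the spirit of the comparison the paper makes) add rules that avoid introducing S when one side does not mention the abstracted variable.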

We present a simple implementation of weak head reduction in the λ-calculus with interaction nets using package duplication.

KRISP is a representation system and set of interpretation protocols that is used in the Sparser natural language understanding system to embody the meaning of texts and their pragmatic contexts. It is based on a denotational notion of semantic interpretation, where the phrases of a text are directly projected onto a largely pre-existing set of individuals and categories in a model, rather than first going through a level of symbolic representation such as a logical form. It defines a small set of semantic object types, grounded in the lambda calculus, and it supports the principle of uniqueness and supplies first-class objects to represent partially-saturated relationships. KRISP is being used to develop a core set of concepts for such things as names, amounts, time, and modality, which are part of a few larger models for domains including Who's News and joint ventures. It is targeted at the task of information extraction, emphasizing the need to relate entities mentioned in new texts to a large set of pre-defined entities and those read about in earlier articles or in the same article.

This paper describes theoretical and practical aspects of a partial evaluator that treats a parallel lambda language. The parallel language presented combines the lambda calculus with a message-passing communication mechanism. It can be used to write a programming language's denotational semantics in a way that extracts the parallelism in the program. From this denotational definition of the programming language, the partial evaluator can generate a parallel compiler for the language by self-application. The key technique of partial evaluation is binding-time analysis, which determines in advance which parts of the source program can be evaluated during partial evaluation and which parts cannot. A binding-time analysis based on type inference is described. A new type, chcode, is introduced into the type system, denoting the type of those expressions containing residual channel operations. A well-formedness criterion is given which ensures that partial evaluation neither commits type errors nor changes the sequence of channel operations. Before binding-time analysis, channel analysis is used to analyze the communication relationship between send and receive processes.
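A binding-time analysis in miniature: the sketch below classifies expressions as static or dynamic, which is the core judgment such an analysis makes. The paper's type-inference formulation and its chcode channel type are not modeled here; this is only the basic static/dynamic propagation.

```python
# Minimal binding-time analysis over arithmetic expressions:
# a compound expression is static ('S') only if all parts are static;
# any dynamic ('D') leaf contaminates everything above it.
# expr: int literal | str variable | (op, left, right)
def bta(expr, static_vars):
    """Return 'S' (evaluable at partial-evaluation time) or 'D' (residual)."""
    if isinstance(expr, int):
        return 'S'                                   # literals are static
    if isinstance(expr, str):
        return 'S' if expr in static_vars else 'D'   # known vs. unknown input
    _, left, right = expr
    return 'S' if bta(left, static_vars) == bta(right, static_vars) == 'S' else 'D'
```

In the paper's setting the analogous analysis must also track channel operations, since evaluating a send or receive too early would reorder communication; that is what the well-formedness criterion guards against.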

Analysis of the classical concept lattice and the rough concept lattice shows that a concept's extent consists of objects that either possess all of the intension's attributes or possess only a single one, which severely lowers the support and confidence of the association rules extracted. To address this, a new concept-lattice structure, the interval concept lattice Lαβ(Mα, Mβ, Y), is proposed, in which a concept's extent is the set of objects satisfying the intension's attributes to a degree within the interval [α, β] (0 ≤ α ≤ β ≤ 1). It is proved that when α = β = 1 the interval concept lattice degenerates to the classical concept lattice, and that when β = 1 and α > 0 it degenerates to the rough concept lattice. Measures on interval concepts such as precision and coverage are then defined together with their properties; several properties unique to the interval concept lattice are proved; a preliminary method for constructing the interval concept lattice is given; and an example demonstrates the necessity and practicality of the proposed structure.
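One plausible reading of the interval extent can be sketched as follows, under the assumption that "satisfying the intension to a degree in [α, β]" means the fraction of the intension's attributes an object possesses lies in that interval. This is an illustration of the idea, not the paper's formal definition.

```python
# Hedged sketch of an interval extent: for an attribute set B, keep the
# objects whose fraction of satisfied attributes in B lies in [alpha, beta].
# Assumed reading of the abstract, not the paper's formal construction.
def interval_extent(context, B, alpha, beta):
    """context: dict object -> set of attributes; B: intension attribute set."""
    out = set()
    for obj, attrs in context.items():
        ratio = len(attrs & B) / len(B)
        if alpha <= ratio <= beta:
            out.add(obj)
    return out

ctx = {'o1': {'a', 'b'}, 'o2': {'a'}, 'o3': set()}
# alpha = beta = 1 recovers the classical extent: objects with all of B
print(sorted(interval_extent(ctx, {'a', 'b'}, 1, 1)))    # ['o1']
print(sorted(interval_extent(ctx, {'a', 'b'}, 0.5, 1)))  # ['o1', 'o2']
```

Note how α = β = 1 collapses to the classical all-attributes extent, matching the degeneration result the abstract states.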

We present a semi-decision algorithm for the unifiability of two set-theoretic formulas modulo -reduction. The algorithm is based on the approach developed by G. Huet for type theory, but requires additional measures because formulas in set theory are not all normalizable. We present the algorithm in an Ada-like pseudocode, and then prove two theorems that show the completeness and correctness of the procedure. We conclude by showing that -unification is not a complete quantifier substitution method for set theory, unlike first-order unification and first-order logic. In this respect set theory is similar to type theory (higher-order logic). This material is based upon work supported by the National Science Foundation under award number ISI-8560438. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the author and do not necessarily reflect the views of the National Science Foundation.

In the process of using concept lattices for uncertainty reasoning, a complete structure of a concrete linguistic truth-valued lattice implication algebra is given. As an extension of concept-lattice theory, a linguistic truth-valued concept lattice for handling uncertain information is proposed; based on it, an inner-approximation uncertainty reasoning rule and an outer-approximation uncertainty reasoning rule are given, and the reducibility of these two rules is then verified.

A Linear Spine Calculus

Starting from an analysis of why the A* algorithm is time-consuming, the Lambda* algorithm is proposed: by reducing the number of nodes kept in the open list, it reduces the amount of computation, so the algorithm can obtain a near-optimal path in less time. Relative to A*, path planning with Lambda* reduces the time cost by 48.76% in 2D environments and by 30.11% in 3D environments. Even in complex 3D environments, Lambda* can quickly obtain a near-optimal path plan, better meeting the fast path-planning needs of modern industrial robots.
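The Lambda* modifications are not detailed in the abstract; for reference, here is a plain A* on a 4-connected grid, making explicit the open list whose size Lambda* reduces. The grid encoding and function name are illustrative assumptions.

```python
# Baseline A* on a 4-connected grid of 0 (free) / 1 (blocked) cells.
# The open list is the heap below; Lambda*'s pruning of it is NOT
# reproduced here, since the abstract does not specify it.
import heapq

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_heap = [(h(start), 0, start, [start])]              # (f, g, node, path)
    best_g = {}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        if node in best_g and best_g[node] <= g:
            continue                                         # stale entry
        best_g[node] = g
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(open_heap,
                               (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
    return None                                              # goal unreachable
```

The open list grows with every expanded frontier cell; an algorithm that keeps fewer nodes in it, as Lambda* is reported to do, trades some optimality guarantees for the measured speedups.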

Generating a Concept Lattice from an Object Lattice
An improved object lattice is built by introducing two partial orders, ≤′ and ■′, and a new meet operation ∩′ on the object set; a concept lattice is then generated from this object lattice. Compared with the concept lattice produced by the usual intersection ∩ on object sets, the concept nodes it generates include not only the objects of the sub-nodes of the ordinary concept lattice but also newly generated objects.
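The ordinary construction that the abstract compares against can be sketched as the closure of attribute extents under set intersection. The primed operations of the improved object lattice are the paper's own and are not reproduced here.

```python
# Standard generation of all concept extents of a formal context:
# the attribute extents, closed under set intersection, plus the full
# object set. The paper's primed operations are not modeled.
def extents_by_intersection(context):
    """context: dict object -> set of attributes; returns a set of frozensets."""
    attributes = set().union(*context.values()) if context else set()
    extents = {frozenset(context)}            # top concept: all objects
    for m in attributes:
        attr_extent = frozenset(o for o, a in context.items() if m in a)
        # close the family under intersection with the new attribute extent
        extents |= {attr_extent & prev for prev in extents}
    return extents
```

For a context where o1 has {a, b}, o2 has {a}, and o3 has {b}, this yields exactly the four classical extents {o1,o2,o3}, {o1,o2}, {o1,o3}, {o1}; the abstract's point is that its primed meet produces nodes beyond these.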

Applying the soft set theory proposed by Molodtsov in 1999 to lattice theory, the concepts of soft lattices and soft sublattices are proposed, and their relevant properties are discussed. In addition, based on the notions of -soft sets and q-soft sets, some properties of ideal soft lattices and filter soft lattices are discussed.
