Similar Documents
20 similar documents found.
1.
Full first-order linear logic can be presented as an abstract logic programming language in Miller's system Forum, which yields a sensible operational interpretation in the ‘proof search as computation’ paradigm. However, Forum still has to deal with syntactic details that would normally be ignored by a reasonable operational semantics. In this respect, Forum improves on Gentzen systems for linear logic by restricting the language and the form of inference rules. We further improve on Forum by restricting the class of formulae allowed, in a system we call G-Forum, which is still equivalent to full first-order linear logic. The only formulae allowed in G-Forum have the same shape as Forum sequents: the restriction does not diminish expressiveness and makes G-Forum amenable to proof theoretic analysis. G-Forum consists of two (big) inference rules, for which we show a cut elimination procedure. This does not need to appeal to finer detail in formulae and sequents than is provided by G-Forum, thus successfully testing the internal symmetries of our system.

2.
Models specified in the language of basic protocols are considered. These models are attribute transition systems, and their states are defined by formulas of multisort first-order predicate calculus over system attributes. Attributes of simple numeric and symbolic types, functional types, and queues are allowed. Assignment operators, queue update operators, and arbitrary formulas are used in postconditions of basic protocols. To pass from one state to another, a predicate transformer is constructed as a function of formula transformation. The following main property of the predicate transformer is proved: it calculates the strongest postcondition for symbolic states.
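For reference, a minimal sketch of the standard strongest-postcondition rule for a single assignment (the textbook definition, stated here for orientation; it is not necessarily the exact predicate transformer constructed for basic protocols):

\[ \mathrm{sp}(x := e,\; P) \;=\; \exists x_0.\; P[x_0/x] \,\wedge\, x = e[x_0/x] \]

For example, $\mathrm{sp}(x := x + 1,\; x \ge 0) = \exists x_0.\, x_0 \ge 0 \wedge x = x_0 + 1$, which simplifies to $x \ge 1$.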

3.
A visual language is a language whose morphemes are icons and whose statements are composed of visual objects. Instructions are therefore not written in the text-editing style of traditional character-based languages; a special visual statement editor must be designed so that users can compose visual statements. This paper defines the program editor of VSQL (the Mach machine), describes the workflow of Mach, and uses first-order temporal logic to describe its behaviour. This clarifies the relationships between the interoperating objects of the Mach machine, solves the problem of Mach machine reuse, and lays a foundation for realizing a visual programming method in VSQL.

4.
An explanation for the uncertain progress of formalist linguistics is sought in an examination of the concept of syntax. The idea of analyzing language formally was made possible by developments in 20th century logic. It has been pointed out by many that the analogy between natural language and a formal system may be imperfect, but the objection made here is that the very concept of syntax, when applied to any non-abstract system of communication, is flawed as it is commonly used. Syntax is properly defined with respect to an individual transformation rule that might be applied to some message. Collections of syntax rules, however, are inevitably due to categories imposed by an observer, and do not correspond to functional features found in non-abstract systems. As such, these categories should not be relied upon as aids to understanding any natural system.

5.
We present an extension of first-order term rewriting systems. It involves variable binding in the term language. We develop systems called binding term rewriting systems (BTRSs) in a stepwise manner. First we present the term language, then formulate equational logic. Finally, we define rewriting systems. This development is novel because we follow the initial algebra approach in an extended notion of Σ-algebras in various functor categories. These are based on Fiore-Plotkin-Turi’s presheaf semantics of variable binding and Lüth-Ghani’s monadic semantics of term rewriting systems. We characterise the terms, equational logic and rewrite systems for BTRSs as initial algebras in suitable categories. Then, we show an important rewriting property of BTRSs: orthogonal BTRSs are confluent. Moreover, by using the initial algebra semantics, we give a complete characterisation of termination of BTRSs. Finally, we discuss our design choice of BTRSs from a semantic perspective. An earlier version appeared in Proc. Fifth ACM-SIGPLAN International Conference on Principles and Practice of Declarative Programming (PPDP 2003).
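As an illustration (a generic example of rewriting with binders, not a rule taken from the paper), a binding rewrite rule can push negation under a quantifier, with the binder handled by the term syntax itself rather than by a side condition on free variables:

\[ \mathsf{not}(\mathsf{forall}([x]\, p)) \;\to\; \mathsf{exists}([x]\, \mathsf{not}(p)) \]

Here $[x]\,p$ denotes the term $p$ with the variable $x$ bound, in the style of binding signatures.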

6.
Fragments of Language
By a fragment of a natural language we mean a subset of that language equipped with semantics which translate its sentences into some formal system such as first-order logic. The familiar concepts of satisfiability and entailment can be defined for any such fragment in a natural way. The question therefore arises, for any given fragment of a natural language, as to the computational complexity of determining satisfiability and entailment within that fragment. We present a series of fragments of English for which the satisfiability problem is polynomial, NP-complete, EXPTIME-complete, NEXPTIME-complete and undecidable. Thus, this paper represents a case study in how to approach the problem of determining the logical complexity of various natural language constructions. In addition, we draw some general conclusions about the relationship between natural language and formal logic.
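To illustrate what such a fragment amounts to (an illustrative example; the fragments studied in the paper may differ), the syllogistic-style sentences 'Every artist is a beekeeper' and 'Some artist admires every beekeeper' translate into first-order logic as

\[ \forall x\,(\mathrm{artist}(x) \rightarrow \mathrm{beekeeper}(x)), \qquad \exists x\,\big(\mathrm{artist}(x) \wedge \forall y\,(\mathrm{beekeeper}(y) \rightarrow \mathrm{admires}(x, y))\big), \]

and satisfiability for the fragment asks whether a given finite set of such translations has a model.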

7.
In classical database theory, relational calculus has long been used in expressing query formulae and integrity constraints. In fact, relational calculus formulae are much easier to deal with than first-order formulae when evaluating queries and validating database updates in the database environment. In deductive databases, however, first-order calculus is preferred because it is convenient when proof procedures are involved. Since both situations should coexist in advanced information systems, it is very desirable to devise a conversion procedure between relational calculus and first-order calculus. In this paper, interpretation of first-order formulae in the database environment is discussed first; then tuple calculus, an extension of relational calculus, is presented. This extension enables us to describe query formulae and general rules necessary in advanced information systems, in particular those dealing with complex objects. Finally, a conversion algorithm from first-order formulae into tuple calculus formulae is presented. Several application issues are also included.
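As a small illustration of the kind of conversion involved (the relation and attribute names are hypothetical, and the paper's tuple-calculus syntax may differ in detail), a first-order query and a corresponding tuple calculus formula:

\[ \{\, x \mid \mathrm{Student}(x) \wedge \exists y\,(\mathrm{Enrolled}(x, y) \wedge y = \text{'DB'}) \,\} \]
\[ \{\, t \mid t \in \mathrm{Student} \wedge \exists e \in \mathrm{Enrolled}\,(e.\mathrm{sid} = t.\mathrm{id} \wedge e.\mathrm{course} = \text{'DB'}) \,\} \]

The conversion replaces quantification over domain elements by quantification over tuples of the stored relations.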

8.
Resultants are defined in the sparse (or toric) context in order to exploit the structure of the polynomials as expressed by their Newton polytopes. Since determinantal formulae are not always possible, the most efficient general method for computing resultants is rational formulae. This is made possible by Macaulay’s famous determinantal formula in the dense homogeneous case, extended by D’Andrea to the sparse case. However, the latter requires a lifting of the Newton polytopes, defined recursively on the dimension. Our main contribution is a single-lifting function of the Newton polytopes, which avoids recursion, and yields a simpler method for computing Macaulay-type formulae of sparse resultants. We focus on the case of generalized unmixed systems, where all Newton polytopes are scaled copies of each other, and sketch how our approach may extend to mixed systems of up to four polynomials, as well as those whose Newton polytopes have a sufficiently different face structure. In the mixed subdivision used to construct the matrices, our algorithm defines significantly fewer cells than D’Andrea’s, though the matrix formulae are the same. We discuss asymptotic complexity bounds and illustrate our results by fully studying a bivariate example.
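For context, the classical dense-case result referred to here is Macaulay's rational formula, which expresses the resultant as a quotient of two determinants (the sparse, Macaulay-type formulae above replace the degree data by information from the Newton polytopes and the lifting):

\[ \mathrm{Res}_{d_1, \dots, d_n}(f_1, \dots, f_n) \;=\; \frac{\det M}{\det M'}, \]

where $M$ is the Macaulay matrix of the $f_i$ in degree $\rho = \sum_i (d_i - 1) + 1$ and $M'$ is a prescribed square submatrix of $M$.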

9.
Lexical knowledge is increasingly important in information systems, for example in indexing documents using keywords, or disambiguating words in a query to an information retrieval system, or a natural language interface. However, it is a difficult kind of knowledge to represent and reason with. Existing approaches to formalizing lexical knowledge have used languages with limited expressibility, such as those based on inheritance hierarchies, and in particular, they have not adequately addressed the context-dependent nature of lexical knowledge. Here we present a framework, based on default logic, called the dex framework, for capturing context-dependent reasoning with lexical knowledge. Default logic is a first-order logic offering a more expressive formalisation than inheritance hierarchies: (1) First-order formulae capturing lexical knowledge about words can be inferred; (2) Preferences over formulae can be based on specificity, reasoning about exceptions, or explicit priorities; (3) Information about contexts can be reasoned with as first-order formulae; and (4) Information about contexts can be derived as default inferences. In the dex framework, a word for which lexical knowledge is sought is called a query word. The context for a query word is derived from further words, such as words in the same sentence as the query word. These further words are used with a form of decision tree called a context classification tree to identify which contexts hold for the query word. We show how we can use these contexts in default logic to identify lexical knowledge about the query word such as synonyms, antonyms, specializations, meronyms, and more sophisticated first-order semantic knowledge. We also show how we can use a standard machine learning algorithm to generate context classification trees.
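As a sketch of the style of rule involved (the predicate and context names here are hypothetical; the dex framework's actual rules may differ), a normal default stating that in a financial context the query word 'bank' denotes a financial institution, unless this is inconsistent with what else is known:

\[ \frac{\mathrm{occurs}(\mathit{bank}, s) \wedge \mathrm{context}(s, \mathit{financial}) \;:\; \mathrm{denotes}(\mathit{bank}, s, \mathit{financial\_institution})}{\mathrm{denotes}(\mathit{bank}, s, \mathit{financial\_institution})} \]

Because the rule is a normal default, its conclusion is withheld whenever the context classification supports a conflicting sense for the same occurrence.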

10.
Quantitative Separation Logic and Programs with Lists
This paper presents an extension of a decidable fragment of Separation Logic for singly-linked lists, defined by Berdine et al. (2004). Our main extension consists in introducing atomic formulae of the form $\mathrm{ls}_k(x, y)$, describing a list segment of length $k$ stretching from $x$ to $y$, where $k$ is a logical variable interpreted over positive natural numbers that may occur further inside Presburger constraints. We study the decidability of the full first-order logic combining unrestricted quantification of arithmetic and location variables. Although the full logic is found to be undecidable, validity of entailments between formulae with the quantifier prefix in the language $\exists^* \{\exists_{\mathbb{N}}, \forall_{\mathbb{N}}\}^*$ is decidable. We provide here a model theoretic method, based on a parametric notion of shape graphs. We have implemented our decision technique, providing a fully automated framework for the verification of quantitative properties expressed as pre- and post-conditions on programs working on lists and integer counters.
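To give a flavour of the quantitative entailments involved (a plausible illustration, not an example taken from the paper; the exact definition of $\mathrm{ls}_k$ there may differ in corner cases such as empty segments), two disjoint list segments, the second of which is nil-terminated, compose into a single segment whose length is the sum of the two:

\[ \mathrm{ls}_k(x, y) * \mathrm{ls}_l(y, \mathrm{nil}) \;\models\; \exists_{\mathbb{N}} m.\; \mathrm{ls}_m(x, \mathrm{nil}) \wedge m = k + l \]

Both sides fall within the decidable $\exists^* \{\exists_{\mathbb{N}}, \forall_{\mathbb{N}}\}^*$ prefix class.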

11.
We prove a dichotomy theorem for the rank of propositional contradictions, uniformly generated from first-order sentences, in both the Lovász-Schrijver (LS) and Sherali-Adams (SA) refutation systems. More precisely, we first show that the propositional translations of first-order formulae that are universally false, that is, fail in all finite and infinite models, have LS proofs whose rank is constant, independent of the size of the (finite) universe. In contrast to that, we prove that the propositional formulae that fail in all finite models, but hold in some infinite structure, require proofs whose SA rank grows polynomially with the size of the universe. Until now, this kind of so-called complexity gap theorem has been known for tree-like Resolution and, in somewhat restricted forms, for the Resolution and Nullstellensatz systems. As far as we are aware, this is the first time the Sherali-Adams lift-and-project method has been considered as a propositional refutation system (since the conference version of this paper, SA has been considered as a refutation system in several further papers). An interesting feature of the SA system is that it simulates LS, the Lovász-Schrijver refutation system without semi-definite cuts, in a rank-preserving fashion.
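A standard example of a sentence in the second class (not necessarily the one used in the paper) asserts that a function is injective but not surjective; it fails in every finite model yet holds in $(\mathbb{N}, x \mapsto x + 1)$, so its propositional translations fall on the high-rank side of the gap:

\[ \forall x\, \forall y\, \big(f(x) = f(y) \rightarrow x = y\big) \;\wedge\; \exists z\, \forall x\, \big(f(x) \neq z\big) \]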

12.
A programming system is a language made from a fixed class of data abstractions and a selection of familiar deterministic control and assignment constructs. It is shown that the sets of all ‘before-after’ first-order assertions which are true of programs in any such language can uniquely determine the input-output semantics of the language, provided one allows the use of auxiliary operators on its ground types. After this, we study programming systems wherein the data types are syntactically defined using a first-order specification language, with the objective of eliminating these auxiliary operators. Special attention is paid to algebraic specifications, complete first-order specifications, and to arithmetical computation in the context of a specified programming system.

13.
Good (formal) specifications of software components characterize their functionality precisely. This makes it easier to modify a module specification when requirements change or its functionality must be extended after delivery to the users, and thus, in general, it also eases module reuse. Moreover, if existing modules are specified not in natural language but in the well-defined syntax of a specification language, the needed modifications can be processed automatically. This work presents a method to derive the modifications of the specification of an existing module (given by means of the LOTOS language) from a characterization, given by means of temporal logic formulae, of the environment in which the new module is to be used. The method consists of tableau-based rules that build the required modifications, and it always produces a fitting solution for formulae that are satisfiable and belong to a given class.

14.
We consider a first-order property specification language for run-time monitoring of dynamic systems. The language is based on a linear-time temporal logic and offers two kinds of quantifiers to bind free variables in a formula. One kind contains the usual first-order quantifiers that provide for replication of properties for dynamically created and destroyed objects in the system. The other kind, called attribute quantifiers, is used to check dynamically changing values within the same object. We show that expressions in this language can be efficiently checked over an execution trace of a system.
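As an illustrative property in this style (the predicate names are hypothetical and the concrete syntax of the language may differ), a formula combining a first-order quantifier over dynamically created objects with temporal operators:

\[ \forall f.\; \Box\, \big(\mathrm{open}(f) \rightarrow \Diamond\, \mathrm{close}(f)\big) \]

that is, every file object opened during the run is eventually closed; an attribute quantifier would additionally let the formula track a changing value, such as the size, of the same object across states.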

15.
The well-known term model constructions for equational abstract data type specifications provide a basis for elementary semantic reasoning, but their models lack the structure necessary for reasoning about relationships between the elements. Based upon the identities supplied in the data type's specification, we define a congruence relation which describes the relative ‘answer producing’ behaviour of words in the data type language, leading to a partially ordered quotient algebra model. For those data types whose identity sets satisfy a condition implying the Church-Rosser (confluence) property, Wadsworth's approximation property holds: the meaning of a word in the model is the least upper bound of the meanings of its ‘syntactic approximants’. Thus the model provides fixed point properties while remaining fully abstract.
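In symbols, the approximation property takes the usual form (the notation here is ours: $[\![\,\cdot\,]\!]$ is the meaning function of the quotient model and $\mathcal{A}(w)$ the set of syntactic approximants of the word $w$):

\[ [\![ w ]\!] \;=\; \bigsqcup \big\{\, [\![ a ]\!] \;\big|\; a \in \mathcal{A}(w) \,\big\} \]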

16.
Aspect Oriented Programming can arbitrarily distort the semantics of programs. In particular, weaving can invalidate crucial safety and liveness properties of the base program. In this article, we identify categories of aspects that preserve some classes of properties. Specialized aspect languages are then designed to ensure that aspects belong to a specific category and, therefore, that woven programs will preserve the corresponding properties. Our categories of aspects, inspired by Katz’s, comprise observers, aborters, confiners and weak intruders. Observers introduce new instructions and a new local state but they do not modify the base program’s state and control-flow. Aborters are observers which may also abort executions. Confiners only ensure that executions remain in the reachable states of the base program. Weak intruders are confiners between two advice executions. These categories (along with two others) are defined formally based on a language independent abstract semantics framework. The classes of preserved properties are defined as subsets of LTL for deterministic programs and CTL* for non-deterministic ones. We can formally prove that, for any program, the weaving of any aspect in a category preserves any property in the related class. We present, for most aspect categories, a specialized aspect language which ensures that any aspect written in that language belongs to the corresponding category. It can be proved that these languages preserve the corresponding classes of properties by construction. The aspect languages share the same expressive pointcut language and are designed w.r.t. a common imperative base language. Each category and language is illustrated by simple examples. The appendix provides semantics and two instances of proofs: the proof of preservation of properties by a category and the proof that all aspects written in a language belong to the corresponding category.

17.
Formal specification uses formal languages to build specifications of the software and hardware systems under development, characterizing system models and properties. Among property specifications, branching-time specifications play a very important role in system verification. In the classical setting, system property specifications are based on two-valued logic and cannot describe inconsistent or uncertain information. Generalizing them to a fuzzy-logic setting therefore helps with the formal verification of fuzzy systems. This paper first gives a formal definition of branching-time properties in the fuzzy setting, focusing on safety and liveness. It then defines two closure operations, which give rise to four types of properties: universal safety, universal liveness, existential safety, and existential liveness. Finally, it proves that every branching-time property is either the intersection of an existential safety property and an existential liveness property, or the intersection of a universal safety property and a universal liveness property, or the intersection of an existential safety property and a universal liveness property.
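For comparison, the classical two-valued, linear-time decomposition theorem that these results refine states that every property $P \subseteq \Sigma^{\omega}$ is the intersection of a safety property and a liveness property, obtained from the topological closure $\mathrm{cl}(P)$:

\[ P \;=\; \mathrm{cl}(P) \,\cap\, \big( P \cup (\Sigma^{\omega} \setminus \mathrm{cl}(P)) \big), \]

where $\mathrm{cl}(P)$ is safety and the second component is liveness (Alpern and Schneider); the fuzzy branching-time theorem above is the analogue with the four universal/existential variants.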

18.
Paranormality is an observation property of a language in which the occurrence of unobservable events never exits the closure of the language. In this paper, a synthesis method is proposed to construct a paranormal supervisor. We propose a method to construct a controllable language such that the occurrence of unobservable events does not exit the closure of the controllable language. Moreover, a new observation property, Quasi Output Control Consistency (QOCC), is defined to construct the optimal (least restrictive) non-blocking decentralized supervisory control in the presence of unobservable controllable events. Using the QOCC and natural observer properties, we propose a method to construct a normal supervisor such that an arbitrary pair of lookalike strings are initiated and terminated with identical observable and uncontrollable events. It is assumed that one of these strings has unobservable controllable events. The OCC property is defined in the literature as a special case of the QOCC property, where none of the lookalike strings has unobservable controllable events.
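For reference, the standard notion of normality from supervisory control theory (stated here in the usual notation, which may differ from the paper's): a language $K \subseteq L$ is normal with respect to the plant language $L$ and the natural projection $P$ onto the observable events when

\[ \overline{K} \;=\; P^{-1}\big(P(\overline{K})\big) \,\cap\, \overline{L}, \]

i.e. the closure of $K$ is recoverable from its observed image within the plant. Paranormality and QOCC, as described above, are further observation properties of this kind.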

19.
A formal, foundational approach to autonomous knowledge acquisition is presented. In particular, "learning from examples" and "learning from being told", and the relation of these approaches to first-order representation systems, are investigated. It is assumed initially that the only information available for acquisition is a stream of facts, or ground atomic formulae, describing a domain. On the basis of this information, hypotheses expressed in set-theoretic terms and concerning the application domain may be proposed. As further instances are received, the hypothesized relations may be modified or discarded, and new relations formed. The intent, though, is to characterize those hypotheses that may potentially be formed, rather than to specify the subset of the hypotheses that, for whatever reason, should be held.
Formal systems are derived by means of which the set of potential hypotheses is precisely specified, and a procedure is derived for restoring the consistency of a set of hypotheses after conflicting evidence is encountered. In addition, this work is extended to where a learning system may be "told" arbitrary sentences concerning a domain. Included in this is an investigation of the relation between acquiring knowledge and reasoning deductively. However, the interaction of these approaches leads to immediate difficulties which likely require informal, pragmatic techniques for their resolution. The overall framework is intended both as a foundation for investigating autonomous approaches to learning and as a basis for the development of such autonomous systems.

20.
Balashov, Yuri. Minds and Machines (2020) 30(3): 349–383

The rapid development of natural language processing in the last three decades has drastically changed the way professional translators do their work. Nowadays most of them use computer-assisted translation (CAT) or translation memory (TM) tools whose evolution has been overshadowed by the much more sensational development of machine translation (MT) systems, with which TM tools are sometimes confused. These two language technologies now interact in mutually enhancing ways, and their increasing role in human translation has become a subject of behavioral studies. Philosophers and linguists, however, have been slow in coming to grips with these important developments. The present paper seeks to fill in this lacuna. I focus on the semantic aspects of the highly distributed human–computer interaction in the CAT process, which presents an interesting case of an extended cognitive system involving a human translator, a TM tool, an MT engine, and sometimes other human translators or editors. Considered as a whole, such a system is engaged in representing the linguistic meaning of the source document in the target language. But the roles played by its various components, natural as well as artificial, are far from trivial, and the division of linguistic labor between them throws new light on the familiar notions that were initially inspired by rather different phenomena in the philosophy of language, mind, and cognitive science.

