20 similar documents found; search took 31 ms.
1.
Semantic Issues of Logic Programs (I)    Total citations: 1 (self: 0, others: 1)
Declarative semantics is a central topic in the study of logic programs and the defining feature of logic programming as a declarative paradigm. In recent years, interest in general logic programs with negative premises, together with the combination of logic programming and nonmonotonic reasoning, has produced many new results on the semantics of logic programs. Starting from the early Clark semantics and the least model semantics, this paper surveys part of these developments, including Fitting's three-valued extension of Clark semantics, the ideal model semantics, the stable model semantics, the well-founded model semantics, and the relationships among them, and on this basis further discusses declarative…
2.
Liu Haiyan, 《计算机应用与软件》 (Computer Applications and Software), 1998, (3): 1-8, 19
This paper defines a multi-context logical structure. MCO generalizes classical first-order logic in several respects: each context is associated with a theory; an outer relation exists between contexts; …
3.
Freeing logic programming systems from first-order logic is urgently needed for reasoning about complex objects. This paper proposes the extended logic programming language HILOG and establishes its semantic theory, enriching the formal semantics of logic programming systems. HILOG is a typed language. By introducing algebraic notions such as p-containment and assembly/disassembly, the paper extends the concepts of logical satisfaction and model comparison, proves that a HILOG program has a least model closure and that its standard (assembled-form) least model is unique, and presents a p-intersection theorem for models together with an extended least-fixpoint property of least models. In each of these respects, first-order logic program semantics is a special case of HILOG semantics, and this semantic connection offers the possibility of mapping HILOG programs into first-order systems.
4.
Semantic Issues of Logic Programs (II)    Total citations: 1 (self: 0, others: 1)
The following sections discuss declarative semantics via the model-theoretic approach, which overcomes the above-mentioned shortcomings of Clark semantics. 4. Least model semantics. The least model semantics applies only to positive logic programs; the familiar Prolog programs based on Horn logic adopt this semantics. To better understand the new developments in the model semantics of logic programs, we first consider the least model semantics here. This section involves only two-valued interpretations.
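As a minimal illustration of the least model semantics mentioned above, the following sketch computes the least Herbrand model of a ground positive program as the least fixpoint of the immediate-consequence operator T_P. The rule encoding and atom names are illustrative, not taken from the surveyed papers.

```python
# Hypothetical sketch: least Herbrand model of a ground positive (definite)
# program as the least fixpoint of the immediate-consequence operator T_P.
# Rules are (head, [body atoms]) pairs over ground atoms.

def least_model(rules):
    """Iterate T_P from the empty interpretation until a fixpoint is reached."""
    model = set()
    while True:
        # T_P(model): heads of rules whose entire body holds in `model`
        derived = {head for head, body in rules if all(b in model for b in body)}
        if derived == model:          # fixpoint reached: T_P(model) == model
            return model
        model = derived

# Ground program: p.  q :- p.  r :- p, q.  s :- t.   (t is never derivable)
program = [("p", []), ("q", ["p"]), ("r", ["p", "q"]), ("s", ["t"])]
print(sorted(least_model(program)))   # -> ['p', 'q', 'r']
```

Because T_P is monotone and the iteration starts from the empty set, the loop reaches the least fixpoint; `s` is not derived since its body atom `t` has no rule.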
5.
6.
Denotational Semantics of a Dynamic Fuzzy Logic Programming Language    Total citations: 1 (self: 0, others: 1)
Reference [8], drawing on Dijkstra's guarded-command program structure, gave the basic framework of a dynamic fuzzy logic programming language. Building on that framework, this paper extends and refines it and, following the principles and methods of denotational semantics, uses structural induction to give the denotational semantics of the dynamic fuzzy logic programming language, covering its semantic domains, semantic functions, and denotational semantics. Finally, an example program in the language is given to observe its execution.
7.
From the perspective of belief revision, this paper proposes an iterated negotiation framework between two agents. In this framework, a logic program is regarded as a negotiating agent, and each agent (logic program) chooses one of its answers as its initial negotiation demand. The negotiation between two agents proceeds as a process of mutual updates between the two logic programs, realized by each party accepting part (or all) of the other's demands and giving up part of its own. The paper designs negotiation rules that both parties must obey, formalizes the negotiation framework according to these rules, and gives termination conditions for the negotiation.
8.
After introducing the relevant concepts of constraint logic programs, this paper studies a positive-literal prefix power-set expansion of the constraint atoms of simple monotone constraint logic programs and proves that the normal logic program obtained by the expansion is equivalent to the original constraint logic program. It analyzes the principle behind constructing the alternating-fixpoint well-founded model of normal logic programs, expands a simple monotone constraint logic program into an equivalent normal logic program, and, taking as a starting point the least fixpoint of the given operator on the expanded program, gives the alternating-fixpoint well-founded model of simple monotone constraint logic programs. The paper argues for the reasonableness of the proposed definition of the well-founded model for simple monotone constraint logic programs, showing that translating constraint logic programs into normal logic programs is feasible.
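The alternating-fixpoint construction referred to above can be sketched for plain ground normal programs (without the constraint-atom expansion, which is specific to the paper). This is a minimal sketch following Van Gelder's alternating fixpoint; rule encoding and atom names are illustrative.

```python
# Hypothetical sketch: well-founded model of a ground normal logic program
# via the alternating fixpoint. Rules are (head, pos_body, neg_body) triples.

def gamma(rules, assumed_true):
    """Gelfond-Lifschitz step: delete rules whose negative body meets
    `assumed_true`, then compute the least model of the remaining
    positive program."""
    reduct = [(h, pos) for h, pos, neg in rules if not (set(neg) & assumed_true)]
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in reduct:
            if head not in model and all(a in model for a in pos):
                model.add(head)
                changed = True
    return model

def well_founded(rules):
    """Alternate gamma until the true-set stabilises.
    Returns (true atoms, false atoms); remaining atoms are undefined."""
    atoms = {h for h, _, _ in rules} | {a for _, p, n in rules for a in p + n}
    true = set()
    while True:
        over = gamma(rules, true)       # overestimate: atoms not provably false
        new_true = gamma(rules, over)   # Gamma^2 step: atoms certainly true
        if new_true == true:
            return true, atoms - over
        true = new_true

# a :- not b.   b :- not a.   p :- not q.   (q has no rule)
prog = [("a", [], ["b"]), ("b", [], ["a"]), ("p", [], ["q"])]
t, f = well_founded(prog)
print(t, f)   # p is true, q is false; a and b remain undefined
```

Since gamma is antimonotone, its square is monotone, so the iteration converges; the mutually recursive pair `a`/`b` stays undefined, as expected under the well-founded semantics.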
9.
10.
Guo Hongjian, 《数字社区&智能家居》, 2014, (31)
This paper proposes a clustering algorithm based on semantic computation. By computing the semantic information of words and obtaining word generation probabilities from a semantic knowledge base, it constructs semantic representations of texts and applies measures such as cosine similarity and relative entropy in comparative experiments on semantic similarity between text units. Experimental results show that the proposed algorithm performs well.
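The two similarity measures the abstract names can be sketched over word-probability vectors. This is a minimal sketch with made-up vocabulary and probabilities, not the paper's actual algorithm or knowledge base.

```python
# Hypothetical sketch: comparing two text units by cosine similarity and by
# relative entropy (KL divergence) over word-probability vectors, as the
# abstract describes. Vocabulary and numbers are illustrative.
import math

def cosine(p, q):
    """Cosine of the angle between two sparse word-weight vectors."""
    dot = sum(pv * q.get(w, 0.0) for w, pv in p.items())
    norm_p = math.sqrt(sum(v * v for v in p.values()))
    norm_q = math.sqrt(sum(v * v for v in q.values()))
    return dot / (norm_p * norm_q)

def kl(p, q, eps=1e-9):
    """Relative entropy D(p || q); eps smoothing avoids log(0)."""
    return sum(pv * math.log((pv + eps) / (q.get(w, 0.0) + eps))
               for w, pv in p.items() if pv > 0)

# Word "generation probabilities" for two short texts (made-up numbers)
doc1 = {"logic": 0.5, "program": 0.3, "semantics": 0.2}
doc2 = {"logic": 0.4, "program": 0.4, "cluster": 0.2}
print(round(cosine(doc1, doc2), 3))
print(round(kl(doc1, doc2), 3))
```

Cosine is symmetric and bounded in [0, 1] for non-negative vectors, while relative entropy is asymmetric and unbounded, which is why clustering work often symmetrises or smooths it, as the epsilon term does here.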
11.
In the Unix environment there are a number of debugging tools. None are universally popular and many programmers choose not to use any of them at all. In this paper we present a study of existing debuggers. We highlight a number of reasons why they may not be popular and we discuss some of the features that programmers look for in a debugging aid. Finally, we report on the development of a new screen-based interface, JDB, and some informal results of its usage.
12.
K. Marriott, L. Naish, J.-L. Lassez, 《Annals of Mathematics and Artificial Intelligence》, 1990, 1(1-4): 303-338
More specific versions of definite logic programs are introduced. These are versions of a program in which each clause is further instantiated or removed and which have an equivalent set of successful derivations to those of the original program, but a possibly increased set of finitely failed goals. They are better than the original program because failure in a non-successful derivation may be detected more quickly. Furthermore, information about allowed variable bindings which is hidden in the original program may be made explicit in a more specific version of it. This allows better static analysis of the program's properties and may reveal errors in the original program. A program may have several more specific versions but there is always a most specific version which is unique up to variable renaming. Methods to calculate more specific versions are given and it is characterized when they give the most specific version.
13.
Debugging is one of the most time-consuming activities in program design. Work on automatic debugging has received a great deal of attention and there are a number of symposiums dedicated to this field. Automatic debugging is usually invoked when a test fails in one situation, but succeeds in another. For example, a test fails in one version of the program (or scheduler), but succeeds in another. Automatic debugging searches for the smallest difference that causes the failure. This is very useful when working to identify and fix the root cause of the bug. A new testing method instruments concurrent programs with schedule-modifying instructions to reveal concurrent bugs. This method is designed to increase the probability of concurrent bugs (such as races and deadlocks) appearing. This paper discusses integrating this new testing technology with automatic debugging. Instead of just showing that a bug exists, we can pinpoint its location by finding the minimal set of instrumentations that reveal the bug. In addition to explaining a methodology for this integration, we show an AspectJ-based implementation. We discuss the implementation in detail as it both demonstrates the advantage of the adaptability of open source tools and how our specific change can be used for other testing tools.
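The "smallest difference that causes the failure" idea above is the core of delta debugging. The following is a minimal one-minimal-reduction sketch, not the paper's AspectJ implementation; the failure predicate and instrumentation points are made up for illustration.

```python
# Hypothetical sketch of ddmin-style minimisation: find a small subset of
# schedule-modifying instrumentation points that still makes the test fail.
# `fails` stands in for running the instrumented concurrent test.

def minimise(points, fails):
    """Greedy one-minimal reduction: drop any point whose removal
    still reproduces the failure."""
    assert fails(points), "full instrumentation must reproduce the bug"
    needed = list(points)
    i = 0
    while i < len(needed):
        candidate = needed[:i] + needed[i + 1:]
        if fails(candidate):      # point i is not needed to trigger the bug
            needed = candidate
        else:
            i += 1                # point i is essential; keep it
    return needed

# Toy failure condition: the bug shows up only when both yield-points 2 and 5
# are instrumented (a stand-in for a race made visible by forced switches).
fails = lambda pts: 2 in pts and 5 in pts
print(minimise([1, 2, 3, 4, 5, 6], fails))   # -> [2, 5]
```

The result is one-minimal (removing any single remaining point makes the failure disappear), which is usually enough to point a developer at the interacting schedule points.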
14.
Resultants Semantics for Prolog    Total citations: 1 (self: 0, others: 1)
Maurizio Gabbrielli, Giorgio Levi, Maria Chiara Meo, 《Journal of Logic and Computation》, 1996, 6(4): 491-521
15.
The aim of this work is to develop a declarative semantics for N-Prolog with negation as failure. N-Prolog is an extension of Prolog proposed by Gabbay and Reyle (1984, 1985), which allows for occurrences of nested implications in both goals and clauses. Our starting point is an operational semantics of the language defined by means of top-down derivation trees. Negation as finite failure can be naturally introduced in this context: a goal ¬G may be inferred from a database if every top-down derivation of G from the database finitely fails, i.e., contains a failure node at finite height. Our purpose is to give a logical interpretation of the underlying operational semantics. In the present work (Part 1) we take into consideration only the basic problems of determining such an interpretation, so that our analysis will concentrate on the propositional case. Nevertheless we give an intuitive account of how to extend our results to a first-order language. A full treatment of N-Prolog with quantifiers will be deferred to the second part of this work. Our main contribution to the logical understanding of N-Prolog is the development of a notion of modal completion for programs, or databases. N-Prolog deductions turn out to be sound and complete with respect to such completions. More exactly, we introduce a natural modal three-valued logic PK and we prove that a goal is derivable from a propositional program if and only if it is implied by the completion of the program in the logic PK. This result holds for arbitrary programs. We assume no syntactic restriction, such as stratification (Apt et al. 1988; Bonner and McCarty 1990). In particular, we allow for arbitrary recursion through negation. Our semantic analysis heavily relies on a notion of intensional equivalence for programs and goals. This notion is naturally induced by the operational semantics, and is preserved under substitution of equivalent subexpressions.
Based on this substitution property we develop a theory of normal forms of programs and goals. Every program can be effectively transformed into an equivalent program in normal form. From the simple and uniform structure of programs in normal form one may directly define the completion.
16.
Writing and debugging distributed programs can be difficult. When a program is working, it can be difficult to achieve reasonable execution performance. A major cause of these difficulties is a lack of tools for the programmer. We use a model of distributed computation and measurement to implement a program monitoring system for programs running on the Berkeley UNIX 4.2BSD operating system. The model of distributed computation describes the activities of the processes within a distributed program in terms of computation (internal events) and communication (external events). The measurement model focuses on external events and separates the detection of external events, event record selection and data analysis. The implementation of the measurement tools involved changes to the Berkeley UNIX kernel, and the addition of daemon processes to allow the monitoring activity to take place across machine boundaries. A user interface has also been implemented.
17.
Piero A. Bonatti, 《Artificial Intelligence》, 2004, 156(1): 75-111
This paper illustrates extensively the theoretical properties, the implementation issues, and the programming style underlying finitary programs. They are a class of normal logic programs whose consequences under the stable model semantics can be effectively computed, despite the fact that finitary programs admit function symbols (hence infinite domains) and recursion. From a theoretical point of view, finitary programs are interesting because they enjoy properties that are extremely unusual for a nonmonotonic formalism, such as compactness. From the application point of view, the theory of finitary programs shows how the existing technology for answer set programming can be extended from problem solving below the second level of the polynomial hierarchy to all semidecidable problems. Moreover, finitary programs allow a more natural encoding of recursive data structures and may increase the performance of credulous reasoners.
18.
H. E. Kulsrud, 《Software》, 1974, 4(3): 241-249
This paper describes a method for collecting statistics about the use of compilers as part of the programming process. Results of an application of this method to a time-shared system are presented. A study of the data leads to conclusions about the relative numbers and computer times of compilations used for program writing, correcting and checking. A comparison of two languages surveyed is also made.
19.
D. Ballis, M. Falaschi, C. Ferri, J. Hernández-Orallo, M.J. Ramírez-Quintana, 《Electronic Notes in Theoretical Computer Science》, 2003, 86(3): 85-104
Diagnosis methods in debugging aim at detecting bugs in a program, either by comparing it with a correct specification or with the help of an oracle (typically, the user herself). Debugging techniques for declarative programs usually exploit the semantic properties of programs (and specifications) and generally try to detect one or more "buggy" rules. In this way, rules are split apart in an absolute way: either they are correct or they are not. However, in many situations, not every error has the same consequences, an issue that is ignored by classical debugging frameworks. In this paper, we generalise debugging by considering a cost function, i.e. a function that assigns different cost values to each kind of error and different benefit values to each kind of correct response. The problem is now redefined as assigning a real-valued probability and cost to each rule, by considering each rule more or less "guilty" of the overall error and cost of the program. This makes it possible to rank rules rather than only separate them into right and wrong. Our debugging method also differs from classical approaches in that it is probabilistic, i.e. we use a set of ground examples to approximate these rankings.
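The rule-ranking idea above can be sketched as scoring each rule by the product of an empirical error probability (estimated from ground examples) and the cost of its kind of error. This is a loose illustrative sketch of that scoring scheme, not the paper's actual method; all data and names are made up.

```python
# Hypothetical sketch: ranking rules by estimated "guilt", i.e. the product
# of a rule's empirical error rate over a set of ground examples and the
# cost assigned to the kind of error it makes.

def rank_rules(stats, cost):
    """stats: rule -> (errors, uses); cost: rule -> error cost.
    Returns rule names sorted from most to least blameworthy."""
    score = {r: (e / u) * cost[r] for r, (e, u) in stats.items() if u > 0}
    return sorted(score, key=score.get, reverse=True)

# r1 errs often but cheaply; r2 errs rarely but its error is expensive.
stats = {"r1": (8, 10), "r2": (1, 10), "r3": (5, 10)}
cost = {"r1": 1.0, "r2": 10.0, "r3": 1.0}
print(rank_rules(stats, cost))   # -> ['r2', 'r1', 'r3']
```

Weighting by cost reorders the ranking relative to raw error rate: the rarely-firing but expensive rule `r2` outranks the frequently-wrong but cheap `r1`, which is the point of replacing a binary correct/buggy split with a cost function.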