Similar Documents (20 results)
1.
This paper presents an approach to inductive synthesis of logic programs from examples using problem decomposition and problem reduction principles. This is in contrast to the prevailing logic program induction paradigm, which relies on generalization of programs from examples. The problem reduction is accomplished as a constrained top-down search process, which eventually is to reach trivial problems. Our induction scheme applies a distinguished logic programming language in which programs are combined from elementary predicates by means of combinators conceived of as problem reduction operators, including list recursion forms. The operator form admits inductive synthesis as a top-down piecewise composition of semantically meaningful program elements according to the compositional semantics principle, with appeals neither to special generalization mechanisms, nor to alternative forms of resolution and unification, nor to predicate invention. The search space is reduced by subjecting the induction process to various constraints concerning syntactic form, modes, data types, and computational resources. This is illustrated in the paper with well-modedness constraints, with the aim of synthesising well-moded, procedurally acceptable programs. Preliminary experiments with the proposed induction method lead us to tentatively conclude that the presented approach forms a viable alternative to the prevailing inductive logic programming methods applying generalization from examples.

2.
3.
《Artificial Intelligence》2007,171(5-6):332-360
Temporal reasoning has always been a major test case for knowledge representation formalisms. In this paper, we develop an inductive variant of the situation calculus in ID-logic, a recently proposed extension of classical logic with inductive definitions. ID-logic allows for a uniform representation of various forms of definitions, including monotone inductive definitions and non-monotone forms of inductive definitions such as iterated induction and induction over well-founded posets. We show that the role of such complex forms of definitions is not limited to mathematics but extends to commonsense knowledge representation. In the ID-logic axiomatization of the situation calculus, fluents and causality predicates are defined by simultaneous induction on the well-founded poset of situations. The inductive approach allows us to solve the ramification problem for the situation calculus in a uniform and modular way; our solution is among the most general solutions to the ramification problem in the situation calculus. Using previously developed modularity techniques, we show that the basic variant of the inductive situation calculus without ramification rules is equivalent to the Reiter-style situation calculus.

4.
5.
An obfuscation is a behaviour-preserving program transformation whose aim is to make a program "harder to understand". Obfuscations are mainly applied to make reverse engineering of object-oriented programs more difficult. In this paper, we propose a fresh approach: obfuscating abstract data-types, which allows us to develop structure-dependent obfuscations that would otherwise (traditionally) not be available. We regard obfuscation as data refinement, enabling us to produce equations for proving correctness, and we model the data-type operations as functional programs, making our proofs easy to construct. We show how to generalise an imperative obfuscation (an array split) so that it applies to abstract data-types, and we give specific examples for lists and matrices. We develop a theorem which allows us, under certain conditions, to produce obfuscated operations directly. Our approach also allows us to produce random obfuscations, and we give an example for our list data-type.
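The array split mentioned above can be illustrated concretely. The following is a minimal sketch only, written in Python rather than in the functional setting of the paper, and the class and method names are hypothetical: a flat array is refined into two arrays holding its even- and odd-indexed elements, while `get`/`set` remain behaviour-preserving with respect to the original representation.

```python
class SplitArray:
    # Array-split obfuscation (sketch): the flat array xs is refined
    # into two arrays holding the even- and odd-indexed elements.
    # get/set on the split representation behave exactly like indexing
    # into the original array, which is what makes the transformation
    # behaviour-preserving.
    def __init__(self, xs):
        self.evens = xs[0::2]   # elements at indices 0, 2, 4, ...
        self.odds = xs[1::2]    # elements at indices 1, 3, 5, ...

    def get(self, i):
        return self.evens[i // 2] if i % 2 == 0 else self.odds[i // 2]

    def set(self, i, v):
        if i % 2 == 0:
            self.evens[i // 2] = v
        else:
            self.odds[i // 2] = v
```

Proving correctness amounts to showing that `get` after `set` on the split structure agrees with indexing on the unsplit array, which is the data-refinement obligation the paper expresses equationally.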

6.
This paper outlines a logic programming methodology which applies standardized logic program recursion forms afforded by a system of general-purpose recursion schemes. The recursion schemes are conceived of as quasi higher-order predicates which accept predicate arguments, thereby representing parameterized program modules. This use of higher-order predicates is analogous to higher-order functionals in functional programming; however, these quasi higher-order predicates are handled by a metalogic programming technique within ordinary logic programming. Some of the proposed recursion operators are actualizations of mathematical induction principles (e.g. structural induction as a generalization of primitive recursion); others are heuristic schemes for commonly occurring recursive program forms. The intention is to handle all recursions in logic programs through the given repertoire of higher-order predicates. We carry out a pragmatic feasibility study of the proposed recursion operators with respect to a corpus of common textbook logic programs. This pragmatic investigation is accompanied by an analysis of the theoretical expressivity. The main theoretical results concerning computability are:
   1. Primitive recursive functions can be re-expressed in logic programming by predicates defined solely by non-recursive clauses augmented with a fold recursion predicate akin to the fold operators in functional programming.
   2. General recursive functions can be re-expressed likewise, since fold allows re-expression of a linrec recursion predicate facilitating linear, unbounded recursion.
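Result 1 has a simple analogue outside logic programming. The sketch below (Python; the names `nat_fold`, `factorial` and `add` are hypothetical, not from the paper) confines all recursion and iteration to a single fold over the naturals, so that each defined function contributes only a non-recursive step function, mirroring the role of the fold recursion predicate.

```python
def nat_fold(step, base, n):
    # Fold over the natural number n, viewed as the sequence 0..n-1.
    # This is the only loop in the program; everything else is a
    # non-recursive step function handed to it.
    acc = base
    for i in range(n):
        acc = step(acc, i)
    return acc

# Primitive recursive functions defined with no explicit recursion:
def factorial(n):
    return nat_fold(lambda acc, i: acc * (i + 1), 1, n)

def add(m, n):
    return nat_fold(lambda acc, _i: acc + 1, m, n)
```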

7.
8.
This paper investigates a novel fuzzy control approach developed for a class of nonlinear continuous systems. We combine an input–output feedback linearization (IOFL) method and a gain scheduling (GS) approach to obtain a tracking control structure. The latter is mainly based on the reversing trajectory method, which allows us to estimate the asymptotic stability region around operating points. The limitation of this analytical approach lies in the challenge of determining the intermediate operating points needed to ensure a smooth transition from the actual operating conditions to the desired ones, which arises when there is no intersection between successive stability regions. The proposed fuzzy logic controller (FLC) is synthesized to determine the intermediate operating points, which, in turn, allows the tracking control for nonlinear systems to be implemented online. Finally, the effectiveness of the fuzzy gain scheduling scheme is demonstrated through simulation of a temperature control problem in a CSTR.

9.
We enhance the narrowing-driven partial evaluation scheme for lazy functional logic programs with the computation of symbolic costs. The enhanced scheme allows us to estimate the effects of the program transformer in a precise framework and, moreover, to quantify these effects. The considered costs are “symbolic” in the sense that they measure the number of basic operations performed during a computation rather than actual execution times. Our scheme may serve as a basis to develop speedup analyses and cost-guided transformers. A cost-augmented partial evaluator, which demonstrates the usefulness of our approach, has been implemented in the multi-paradigm language Curry.
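The notion of a symbolic cost can be illustrated with a toy cost-augmented evaluator. This is a hypothetical Python sketch, not the narrowing-driven scheme or the Curry implementation described above: each basic operation (a variable lookup or an operator application) contributes one unit to a symbolic cost that is returned alongside the value, instead of measuring wall-clock time.

```python
def eval_with_cost(expr, env):
    # Evaluate a tiny expression language while accumulating a symbolic
    # cost: the number of basic operations performed, not elapsed time.
    # Expressions are literals, variable names, or (op, left, right).
    if isinstance(expr, (int, float)):
        return expr, 0                 # literals cost nothing
    if isinstance(expr, str):
        return env[expr], 1            # one variable lookup
    op, left, right = expr
    lv, lc = eval_with_cost(left, env)
    rv, rc = eval_with_cost(right, env)
    value = {'+': lv + rv, '*': lv * rv}[op]
    return value, lc + rc + 1          # one operator application
```

A cost-guided transformer would compare such counts before and after transformation to decide whether a specialization is actually a speedup.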

10.
Model Checking Dynamic Memory Allocation in Operating Systems
Most system software, including operating systems, contains dynamic data structures whose shape and contents should satisfy design requirements during execution. Model checking technology, a powerful tool for automatic verification based on state exploration, should be adapted to deal with this kind of structure. This paper presents a method to specify and verify properties of C programs with dynamic memory management. The proposal contains two main contributions. First, we present a novel method to extend explicit model checking of C programs with dynamic memory management: the approach consists of defining a canonical representation of the heap, moving most of the information from the state vector to a global structure. We provide a formal semantics of the method that allows us to prove the soundness of the representation. Second, we combine temporal LTL and CTL logic to define a two-dimensional logic, in time and space, which is suitable for specifying complex properties of programs with dynamic data structures; we also define the model checking algorithms for this logic. The whole method has been implemented in the well-known model checker SPIN and illustrated with an example where a typical memory reader/writer driver is modelled and analyzed.
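The idea behind a canonical heap representation can be sketched as follows. This is a simplification for illustration only (the function name and encoding are hypothetical, not the paper's): reachable nodes are renumbered in deterministic depth-first visit order, so two heaps that differ only in concrete addresses map to the same canonical state, keeping the explored state space small.

```python
def canonical_heap(root, succ):
    # Number reachable nodes in deterministic DFS visit order, so that
    # isomorphic heaps (same shape, different concrete addresses)
    # canonicalise to identical, hashable representations.
    order = {}

    def visit(node):
        if node is None or node in order:
            return
        order[node] = len(order)
        for child in succ(node):
            visit(child)

    visit(root)
    # Re-express every edge in terms of canonical indices.
    return tuple(
        (idx, tuple(order.get(c) for c in succ(node)))
        for node, idx in sorted(order.items(), key=lambda kv: kv[1])
    )
```

Two states of the model checker can then be compared for equality on these canonical forms instead of on raw pointer values.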

11.
Physics-based animation programs can often be modeled in terms of hybrid automata. A hybrid automaton includes both discrete and continuous dynamical variables. The discrete variables define the automaton’s modes of behavior. The continuous variables are governed by mode-dependent differential equations. This paper describes a system for specifying and automatically synthesizing physics-based animation programs based on hybrid automata. The system presents a program developer with a family of parameterized specification schemata. Each schema describes a pattern of behavior as a hybrid automaton passes through a sequence of modes. The developer specifies a program by selecting one or more schemata and supplying application-specific instantiation parameters for each of them. Each schema is associated with a set of axioms in a logic of hybrid automata. The axioms serve to document the semantics of the specification schema. Each schema is also associated with a set of implementation rules. The rules synthesize program components implementing the specification in a general physics-based animation architecture. The system allows animation programs to be developed and tested in an incremental manner. The system itself can be extended to incorporate additional schemata for specifying new patterns of behavior, along with new sets of axioms and implementation rules. It has been implemented and tested on over a dozen examples. We believe this research is a significant step toward a specification and synthesis system that is flexible enough to handle a wide variety of animation programs, yet restricted enough to permit programs to be synthesized automatically.

12.
Summary. The purpose of this work is to present a perspective on the notions of program-substitution and admissibility of rules, which are the tools for proving properties of programs in algorithmic logic and in so-called extended algorithmic logic with quantifiers and non-deterministic programs. We prove that the set of theses of algorithmic logic is closed under each program-substitution. This substitution rule allows us to formulate the problem of algorithmic structural completeness as a question about the derivability of all structural, finitary, admissible rules. We prove the incompleteness of algorithmic logic strengthened by the substitution rule, and its algorithmically structural completeness. This work was supported by the Polish Academy of Sciences CPBP 08-15.

13.
Dong Chuanliang, Lu Jiaheng, Yang Hong, Dong Weiwen 《Journal of Software (软件学报)》2001,12(11):1716-1726
Many application environments of object-oriented databases require frequent schema evolution, but after a schema evolves, application programs based on the previous schema must be modified or recompiled, which causes considerable software waste. This paper proposes an equivalent schema evolution scheme based on a path-independent language to solve this problem. First, the path-independent language is a programming language for object-oriented databases that frees programs from navigating the detailed data schema, and is therefore highly adaptable to schema evolution. Equivalent schema evolution is a new schema evolution scheme that guarantees that application programs written in the path-independent language can be fully reused, without modification, after the schema evolves. In addition, in the system implementing equivalent schema evolution, a virtual relation mechanism and an object evolution technique are proposed to reduce evolution overhead without imposing an extra programming burden on users.

14.
Structured Modeling Language (SML) is a modeling language for the structured modeling framework, which represents the semantics as well as the mathematical structure of a model. This paper extends some structured modeling concepts and defines SML schema operations for formalizing model integration. Executing any of the operations can disrupt the integrity of an SML model schema. Some major propositions and several examples in the paper resolve, in practice, many of the open questions for the operations studied. This research not only contributes an operational approach to performing model integration in SML, but also allows us to understand how different kinds of schema edits can disrupt the formal correctness of SML, and what can be done about it. It helps lay the foundation for the future development of an incremental static semantic analyzer and a smart schema-directed editor.

15.
16.
This paper presents a new interpretation of intuitionistic fuzzy sets in the framework of the Dempster–Shafer theory of evidence (DST). This interpretation makes it possible to represent all mathematical operations on intuitionistic fuzzy values as operations on belief intervals. Such an approach allows us to use Dempster's rule of combination directly to aggregate local criteria represented by intuitionistic fuzzy values in a decision making problem. The usefulness of the developed method is illustrated with a well-known example of a multiple criteria decision making problem. The proposed approach, together with a new method for interval comparison based on DST, allows us to solve multiple criteria decision making problems without intermediate defuzzification even when not only the criteria but also their weights are intuitionistic fuzzy values.
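A minimal sketch of the underlying idea, assuming the standard mapping of an intuitionistic fuzzy value (μ, ν) to a mass assignment on a binary frame, with the hesitancy 1 − μ − ν assigned to the whole frame. The Python function names are hypothetical and the numbers illustrative, not taken from the paper; the combination step is the textbook Dempster's rule on a two-element frame.

```python
def ifv_to_mass(mu, nu):
    # IFV (mu, nu) with mu + nu <= 1; the hesitancy 1 - mu - nu goes to
    # the whole frame {yes, no}, written 'theta' here.
    return {'yes': mu, 'no': nu, 'theta': 1.0 - mu - nu}

def dempster_combine(m1, m2):
    # Dempster's rule on the binary frame: the conflict K is the mass
    # of contradictory intersections, and surviving mass is renormalised.
    k = m1['yes'] * m2['no'] + m1['no'] * m2['yes']
    norm = 1.0 - k
    yes = (m1['yes'] * m2['yes'] + m1['yes'] * m2['theta']
           + m1['theta'] * m2['yes']) / norm
    no = (m1['no'] * m2['no'] + m1['no'] * m2['theta']
          + m1['theta'] * m2['no']) / norm
    theta = m1['theta'] * m2['theta'] / norm
    return {'yes': yes, 'no': no, 'theta': theta}

def belief_interval(m):
    # Bel = m(yes); Pl = m(yes) + m(theta) = 1 - m(no).
    return (m['yes'], m['yes'] + m['theta'])
```

The belief interval [Bel, Pl] plays the role of the interval representation of an intuitionistic fuzzy value in the approach described above.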

17.
Functional and logic programming languages are understood as terms, resp. formulas, of a first-order theory. This theory gives meaning to programs and allows reasoning about programs within full predicate logic, possibly using quantifiers and induction. The operational semantics of programming languages is given by deductively restricted subtheories of the meaning theory, in such a way that computation sequences are in one-to-one correspondence with proofs in the subtheories. Moreover, meaning is invariant under computation, as everything provable in a subtheory is required to be a theorem of the meaning theory. Questions of deadlock and termination of programs are thus reduced to the proof-theoretical question of the existence of proofs in the subtheories.

18.
The basic idea of this approach is to use data access and manipulation functions in data definition, such that testing a given individual data object for conformance to a data definition is done by running a (finally boolean) procedure against it. In essence, schema entries (i.e. definitions, declarations, etc.) are viewed as expressions of predicate logic where the individuals are obtained by the execution of data manipulation operations, which include a rather general information selection technique, or conceptual access method.

Selection of information constructs in databases usually adopts a technique closely tailored to the specific “data model”. It is one of the intentions of this paper to demonstrate the common principles behind the variety of selection techniques by a uniform approach which comprises the selection features of most database management systems and makes them comparable.

At the level of information structure, a kind of “geography” is introduced into a database, which allows one to distinguish the same information construct (record, file, segment, item, coset, tuple, …) in distinct information contexts, or at distinct conceptual locations called “spots”. By definition, every spot (“construct in/with context”) exists only once in a given database. In combination with some basic operators, a logical addressing mechanism has been designed which follows the context of a construct to identify it within this very context. This algorithm turns out to be a general vehicle for locating a construct at a spot, independently of whether the database has a relational, network, hierarchical or any other appearance. The method of context-directed addressing, along with pertinent operators, allows in a very general way (i.e. neither biased nor restricted to a “data model”) to define types of information constructs and of construct transitions as required in a conceptual community schema.

This is demonstrated through examples of schema entries with rather complex cross-consistency conditions and additional transition rules called persistency conditions. The examples are also intended to give an idea of the minimum support to be expected from any future conceptual schema language.

19.
We revisit the issue of epistemological and semantic foundations for autoepistemic and default logics, two leading formalisms in nonmonotonic reasoning. We develop a general semantic approach to autoepistemic and default logics that is based on the notion of a belief pair and that exploits the lattice structure of the collection of all belief pairs. For each logic, we introduce a monotone operator on the lattice of belief pairs. We then show that a whole family of semantics can be defined in a systematic and principled way in terms of fixpoints of this operator (or of certain closely related operators). Our approach elucidates the fundamental constructive principles by which agents form their belief sets, and leads to approximation semantics for autoepistemic and default logics. It also allows us to establish a precise one-to-one correspondence between the family of semantics for default logic and the family of semantics for autoepistemic logic. The correspondence exploits the modal interpretation of a default proposed by Konolige. Our results establish conclusively that default logic can be viewed as a fragment of autoepistemic logic, a result that has long been anticipated. At the same time, they explain the source of the difficulty of formally relating the semantics of default extensions by Reiter and autoepistemic expansions by Moore: these two semantics occupy different locations in the corresponding families of semantics for default and autoepistemic logics.
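The constructive principle at the heart of this approach, iterating a monotone operator from the bottom of a lattice until a fixpoint is reached, can be sketched generically. The toy Python illustration below works on the lattice of subsets ordered by inclusion, not on belief pairs; the operator and names are hypothetical.

```python
def least_fixpoint(op, bottom):
    # Kleene iteration: start at the bottom element and apply the
    # monotone operator until nothing changes. On a finite lattice
    # this terminates, and it terminates at the least fixpoint.
    current = bottom
    while True:
        nxt = op(current)
        if nxt == current:
            return current
        current = nxt

# Toy monotone operator on subsets of {0..4}: one step of
# reachability from node 0 along the edges below.
edges = {0: {1}, 1: {2}, 2: {2}, 3: {4}}

def reach_step(s):
    return {0} | {m for n in s for m in edges.get(n, set())} | s
```

Here the least fixpoint is the set of nodes reachable from 0; for belief pairs, analogous fixpoints of the operator yield the various semantics described above.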

20.
We are interested in separation-logic-based static analysis of programs that use shared mutable data structures. In this paper, we introduce backward and forward analyses for a separation logic called BIμν, an extension of separation logic [Ishtiaq and O’Hearn, BI as an assertion language for mutable data structures, in: POPL’01, 2001, pp. 14–26] with fixpoint connectives and postponed substitution. This allows us to express recursive definitions within the logic, as well as the axiomatic semantics of while statements.
