20 similar documents found.
1.
Certain tasks, such as formal program development and theorem proving, fundamentally rely upon the manipulation of higher-order objects such as functions and predicates. Computing tools intended to assist in performing these tasks are at present inadequate in both the amount of knowledge they contain (i.e., the level of support they provide) and in their ability to learn (i.e., their capacity to enhance that support over time). The application of a relevant machine learning technique, explanation-based generalization (EBG), has thus far been limited to first-order problem representations. We extend EBG to generalize higher-order values, thereby enabling its application to higher-order problem encodings.
Logic programming provides a uniform framework in which all aspects of explanation-based generalization and learning may be defined and carried out. First-order Horn logics (e.g., Prolog) are not, however, well suited to higher-order applications. Instead, we employ λProlog, a higher-order logic programming language, as our basic framework for realizing higher-order EBG. In order to capture the distinction between domain theory and training instance upon which EBG relies, we extend λProlog with the necessity operator of modal logic. We develop a meta-interpreter realizing EBG for this extended language and provide examples of higher-order EBG.
2.
Coupled transformations occur in software evolution when multiple artifacts must be modified in such a way that they remain consistent with each other. An important example involves the coupled transformation of a data type, its instances, and the programs that consume or produce it. Previously, we have provided a formal treatment of transformation of the first two: data types and instances. The treatment involved the construction of type-safe, type-changing strategic rewrite systems. In this paper, we extend our treatment to the transformation of corresponding data processing programs.
The key insight underlying the extension is that both data migration functions and data processors can be represented type-safely by a generalized algebraic data type (GADT). These representations are then subjected to program calculation rules, harnessed in type-safe, type-preserving strategic rewrite systems. For ease of calculation, we use point-free representations and corresponding calculation rules.
Thus, coupled transformations are carried out in two steps. First, a type-changing rewrite system is applied to a source type to obtain a target type together with (representations of) migration functions between source and target. Then, a type-preserving rewrite system is applied to the composition of a migration function and a data processor on the source (or target) type to obtain a data processor on the target (or source) type. All rewrites are type-safe.
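A minimal Haskell sketch of the core idea, that type representations and type-changing refinements can be captured type-safely with a GADT whose witnesses are the migration functions; all names here are illustrative, not the paper's actual combinators:

    {-# LANGUAGE GADTs #-}

    -- Type representations: the GADT index tracks the represented type.
    data Type a where
      TInt  :: Type Int
      TList :: Type a -> Type [a]
      TPair :: Type a -> Type b -> Type (a, b)

    -- A refinement from source type a to target type b, witnessed by
    -- migration functions in both directions.
    data Refinement a b = Refinement
      { migrateTo   :: a -> b
      , migrateFrom :: b -> a
      }

    -- Example rewrite step: refine a value into a pair carrying extra
    -- data. Composing a migration function with a data processor, and
    -- then calculating, is how a processor is ported to the new type.
    addDefault :: b -> Refinement a (a, b)
    addDefault d = Refinement { migrateTo = \x -> (x, d), migrateFrom = fst }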
3.
In this paper, we present the implementation in Tom of a generalization of de Bruijn indices that allows the representation of term-graphs over an algebraic signature. By adding pattern matching and traversal controls to Java, Tom is a well-suited environment for defining program transformations or analyses. As some analyses, e.g. those based on control flow, require graph-like structures, this formalism provides a natural way of expressing them by graph rewriting.
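For reference, a standard textbook de Bruijn encoding of lambda terms in Haskell; the cited work generalizes such indices so that the back-edges of a term-graph can be encoded in the same positional style (this sketch shows only the classical first-order case):

    -- Var n refers to the binder n lambdas above the occurrence.
    data Term
      = Var Int          -- de Bruijn index
      | Lam Term         -- binder; its variable is index 0 in the body
      | App Term Term
      deriving Show

    -- shift d c t: add d to every index in t at or above the cutoff c,
    -- the basic operation needed when moving a term under binders.
    shift :: Int -> Int -> Term -> Term
    shift d c (Var k)   = Var (if k >= c then k + d else k)
    shift d c (Lam b)   = Lam (shift d (c + 1) b)
    shift d c (App f a) = App (shift d c f) (shift d c a)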
4.
We present a new program transformation strategy based on the introduction of lists. This strategy extends the tupling strategy, which is based on the introduction of tuples of fixed length. The list introduction strategy overcomes some of the limitations of the tupling strategy; in particular, it makes it possible to transform general recursive programs into linear recursive ones even in cases where this transformation cannot be performed by the tupling strategy. The linear recursive programs we derive by applying the list introduction strategy have in most cases very good time and space performance, because they avoid repeated evaluations of goals and unnecessary constructions of data structures.
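For orientation, a minimal Haskell illustration (not taken from the paper) of the fixed-arity tupling that the list-introduction strategy generalizes: tupling the two subgoals of the Fibonacci function turns a nonlinear general recursion into a linear one:

    -- General recursive: fib (n-1) and fib (n-2) repeat work.
    fib :: Int -> Integer
    fib 0 = 0
    fib 1 = 1
    fib n = fib (n - 1) + fib (n - 2)

    -- Tupled: one recursive call returning (fib n, fib (n+1)),
    -- so the derived program is linear recursive.
    fibPair :: Int -> (Integer, Integer)
    fibPair 0 = (0, 1)
    fibPair n = let (a, b) = fibPair (n - 1) in (b, a + b)

The list strategy applies when the number of values that must be carried along is not fixed in advance, so a tuple of fixed length no longer suffices.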
5.
The Journal of Logic and Algebraic Programming, 2014, 83(1): 53-63
Tail recursive functions are a special kind of recursive function in which the last action in the body is the recursive call. Tail recursion is important for a number of reasons; for example, tail recursive functions are usually more efficient. In this article, we introduce an automatic transformation of first-order functions into tail recursive form. Functions are defined using a (first-order) term rewrite system. We prove the correctness of the transformation for constructor-based reduction over constructor systems (i.e., typical first-order functional programs).
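A minimal Haskell sketch of the kind of result such a transformation produces; the paper works on first-order term rewrite systems, and this accumulator-passing version is just the standard illustration:

    -- Not tail recursive: the last action is (+), not the recursive call.
    sumList :: [Int] -> Int
    sumList []       = 0
    sumList (x : xs) = x + sumList xs

    -- Tail recursive form: an accumulator makes the recursive call
    -- the last action in the body.
    sumList' :: [Int] -> Int
    sumList' xs = go xs 0
      where
        go []       acc = acc
        go (y : ys) acc = go ys (acc + y)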
6.
We present a generic scheme for the declarative debugging of programs that are written in rewriting-based languages that are equipped with narrowing. Our aim is to provide an integrated development environment in which it is possible to debug a program and then correct it automatically. Our methodology is based on the combination (in a single framework) of a semantics-based diagnoser that identifies those parts of the code that contain errors and an inductive learner that tries to repair them, once the bugs have been located in the program. We develop our methodology in several steps. First, we associate with our programs a semantics that is based on a (continuous) immediate consequence operator, T_R, which models the answers computed by narrowing and is parametric w.r.t. the evaluation strategy, which can be eager or lazy. Then, we show that, given the intended specification of a program R, it is possible to check the correctness of R by a single step of T_R. In order to develop an effective debugging method, we approximate the computed answers semantics of R and derive a finitely terminating, bottom-up abstract diagnosis method, which can be used statically. Finally, a bug-correction program synthesis methodology attempts to correct the erroneous components of the wrong code. We propose a hybrid correction approach, top-down (unfolding-based) as well as bottom-up (induction-based), that is driven by a set of evidence examples which are automatically produced as an outcome by the diagnoser. The resulting program is proven to be correct and complete w.r.t. the considered example sets. Our debugging framework does not require the user to provide error symptoms in advance or to answer difficult questions concerning program correctness. An implementation of our debugging system has been undertaken and demonstrates the workability of our approach.
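A minimal Haskell sketch of the one-step check described above, under strong simplifying assumptions (computed answers modeled as an abstract set, T_R as a function on such sets; all names are illustrative):

    import qualified Data.Set as Set

    type Answer = String  -- stand-in for the computed-answer domain

    -- One-step check: if applying the immediate consequence operator
    -- T_R to the intended specification yields no answer outside the
    -- specification, the program is correct w.r.t. that specification.
    correctByOneStep
      :: (Set.Set Answer -> Set.Set Answer)  -- T_R for the program R
      -> Set.Set Answer                      -- intended specification
      -> Bool
    correctByOneStep tR spec = tR spec `Set.isSubsetOf` spec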
7.
8.
An important step in many compilers for functional languages is lambda lifting. In his thesis, Hughes showed that by doing lambda lifting in a particular way, a useful property called full laziness can be preserved. Full laziness has been seen as intertwined with lambda lifting ever since. We show that, on the contrary, full laziness can be regarded as a process completely separate from lambda lifting, thus making it easy to use different lambda lifters following a full-laziness transformation, or to use the full-laziness transformation in compilers which do not require lambda lifting. Along the way, we present the complete code for our modular fully-lazy lambda lifter, written in the Haskell functional programming language.
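A minimal before/after Haskell sketch of plain lambda lifting (illustrative only; the paper's contribution is separating full laziness from this step): the local function's free variable becomes an explicit parameter of a new top-level function:

    -- Before lifting: f is local and has the free variable n.
    addAll :: Int -> [Int] -> [Int]
    addAll n xs = map f xs
      where f x = x + n

    -- After lifting: the free variable is passed explicitly, so
    -- fLifted can be defined at the top level.
    fLifted :: Int -> Int -> Int
    fLifted n x = x + n

    addAll' :: Int -> [Int] -> [Int]
    addAll' n xs = map (fLifted n) xs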
9.
Population initialisation in genetic programming is both easy, because random combinations of syntax can be generated straightforwardly, and hard, because these random combinations of syntax do not always produce random and diverse program behaviours. In this paper we analyse behavioural diversity, the size and shape of starting populations, the effects of purely semantic program initialisation, and the importance of tree shape in the context of program initialisation. To achieve this, we create four different algorithms, in addition to using the traditional ramped half-and-half technique, applied to seven genetic programming problems. We present results showing that varying the choice and design of program initialisation can dramatically influence the performance of genetic programming. In particular, program behaviour and evolvable tree shape can have dramatic effects on performance. The four algorithms we present have different rates of success on different problems.
Colin G. Johnson
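A minimal Haskell sketch of the traditional ramped half-and-half initialisation mentioned above (structure only: the terminal set, function set, and depth ramp are illustrative placeholders):

    import System.Random (randomRIO)

    data Tree = Leaf Int | Node Tree Tree deriving Show

    -- "Full": every branch reaches the depth limit exactly.
    full :: Int -> IO Tree
    full 0 = Leaf <$> randomRIO (0, 9)
    full d = Node <$> full (d - 1) <*> full (d - 1)

    -- "Grow": branches may terminate early, giving irregular shapes.
    grow :: Int -> IO Tree
    grow 0 = Leaf <$> randomRIO (0, 9)
    grow d = do
      stop <- randomRIO (0, 2 :: Int)
      if stop == 0
        then Leaf <$> randomRIO (0, 9)
        else Node <$> grow (d - 1) <*> grow (d - 1)

    -- Ramp the depth limit across the population and alternate
    -- between full and grow (assumes maxDepth >= 2).
    rampedHalfAndHalf :: Int -> Int -> IO [Tree]
    rampedHalfAndHalf popSize maxDepth = mapM make [0 .. popSize - 1]
      where
        make i =
          let depth = 2 + i `mod` (maxDepth - 1)
          in if even i then full depth else grow depth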
10.
The refinement calculus is a well-established theory for deriving program code from specifications. Recent research has extended the theory to handle timing requirements, as well as functional ones, and we have developed an interactive programming tool based on these extensions. Through a number of case studies completed using the tool, this paper explains how the tool helps the programmer by supporting the many forms of variables needed in the theory. These include simple state variables as in the untimed calculus, trace variables that model the evolution of properties over time, auxiliary variables that exist only to support formal reasoning, subroutine parameters, and variables shared between parallel processes.
11.
Coupled transformation occurs when multiple software artifacts must be transformed in such a way that they remain consistent with each other. For instance, when a database schema is adapted in the context of system maintenance, the persistent data residing in the system's database needs to be migrated to conform to the adapted schema. Also, queries embedded in the application code and any declared referential constraints must be adapted to take the schema changes into account. As another example, in XML-to-relational data mapping, a hierarchical XML Schema is mapped to a relational SQL schema with appropriate referential constraints, and the XML documents and queries are converted into relational data and relational queries. The 2LT project is aimed at providing a formal basis for coupled transformation. This formal basis is found in data refinement theory, point-free program calculation, and strategic term rewriting. We formalize the coupled transformation of a data type by an algebra of information-preserving data refinement steps, each witnessed by appropriate data conversion functions. Refinement steps are modeled by so-called two-level rewrite rules on type expressions that synthesize conversion functions between redex and reduct while rewriting. Strategy combinators are used to compose two-level rewrite rules into complete rewrite systems. Point-free program calculation is applied to optimize synthesized conversion functions, to migrate queries, and to normalize data type constraints. In this paper, we provide an overview of the challenges met by the 2LT project and give a sketch of the solutions offered.
12.
13.
Miguel Gómez-Zamalloa, Elvira Albert, Germán Puebla, Information and Software Technology, 2009, 51(10): 1409-1427
Reasoning about Java bytecode (JBC) is complicated due to its unstructured control flow, its use of three-address code combined with an operand stack, etc. Therefore, many static analyzers and model checkers for JBC first convert the code into a higher-level representation. In contrast to traditional decompilation, such a representation is often not Java source, but rather some intermediate language that is a good input for the subsequent phases of the tool. Interpretive decompilation consists in partially evaluating an interpreter for the compiled language (in this case JBC), written in a high-level language, with respect to the code to be decompiled. There have been proofs of concept that interpretive decompilation is feasible, but important open issues remain when it comes to decompiling a real language such as JBC. This paper presents, to the best of our knowledge, the first modular scheme to enable interpretive decompilation of a realistic programming language to a high-level representation, namely of JBC to Prolog. We introduce two notions of optimality which together require that decompilation does not generate code more than once for any program point. We demonstrate the impact of our modular approach and optimality issues on a series of realistic benchmarks. Decompilation times and decompiled program sizes are linear in the size of the input bytecode program, which demonstrates empirically the scalability of modular decompilation of JBC by partial evaluation.
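A minimal Haskell sketch of the interpretive idea (a toy stack language stands in for JBC, and a hand-unfolded residual function stands in for the partial evaluator's output; everything here is illustrative):

    -- Toy "bytecode" and its interpreter over an operand stack.
    data Instr = Push Int | Add

    interp :: [Instr] -> [Int] -> [Int]
    interp []            st           = st
    interp (Push n : is) st           = interp is (n : st)
    interp (Add : is)    (a : b : st) = interp is (a + b : st)
    interp _             _            = error "stack underflow"

    -- Partially evaluating interp w.r.t. the fixed program
    -- [Push 1, Push 2, Add] unfolds all interpretive overhead and
    -- leaves this residual, "decompiled" function:
    residual :: [Int] -> [Int]
    residual st = 3 : st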
14.
15.
Recent proposals for multi-paradigm declarative programming combine the most important features of functional, logic and concurrent programming into a single framework. The operational semantics of these languages is usually based on a combination of narrowing and residuation. In this paper, we introduce a non-standard, residualizing semantics for multi-paradigm declarative programs and prove its equivalence with a standard operational semantics. Our residualizing semantics is particularly relevant within the area of program transformation where it is useful, e.g., to perform computations during partial evaluation. Thus, the proof of equivalence is a crucial result to demonstrate the correctness of (existing) partial evaluation schemes.
16.
17.
A system has been implemented that enables students on a programming course to hand in programs for assessment without the need for printed listings to be generated. The system test-runs all submitted programs using standard sets of data and then facilitates the marking of these programs by the staff teaching the course. The implementation of this system is described, and the experience gained from using it is discussed.
18.
In Gori [An abstract interpretation framework to reason on finite failure and other properties of finite and infinite computations, Theoret. Comput. Sci. 290(1) (2003) 863-936], a new fixpoint semantics that correctly models finite failure was defined. This semantics is And-compositional, compositional w.r.t. instantiation, and based on a co-continuous operator. Based on this fixpoint semantics, a new inductive method can be defined that verifies a program w.r.t. the property of finite failure. In this paper we show how Ferrand's approach, which uses both a least fixpoint and a greatest fixpoint semantics, can be adapted to finite failure. The verification method is not effective. Therefore, we consider an approximation from above and an approximation from below of our semantics, which give two different finite approximations. These approximations are used for effective program verification.
19.
This paper describes a practical application of transformation-based analysis and code generation. An overview is given of an approach for automatically constructing Java stress tests whose execution exercises all “interesting” class initialization sequence possibilities for a given class hierarchy.
20.
From Ukkonen to McCreight and Weiner: A Unifying View of Linear-Time Suffix Tree Construction
We review the linear-time suffix tree constructions by Weiner, McCreight, and Ukkonen. We use the terminology of the most recent algorithm, Ukkonen's on-line construction, to explain its historic predecessors. This reveals relationships much closer than one would expect, since the three algorithms are based on rather different intuitive ideas. Moreover, it completely explains the differences between these algorithms in terms of simplicity, efficiency, and implementation complexity.
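For readers new to the data structure, a minimal Haskell sketch of a naive (quadratic) suffix trie; the three algorithms discussed here achieve linear time by adding compressed edges and suffix links, which this sketch deliberately omits:

    import qualified Data.Map as Map

    -- One branch per character; naive construction inserts every
    -- suffix separately, hence O(n^2) time and space.
    newtype Trie = Trie (Map.Map Char Trie)

    emptyTrie :: Trie
    emptyTrie = Trie Map.empty

    insert :: String -> Trie -> Trie
    insert []       t        = t
    insert (c : cs) (Trie m) = Trie (Map.alter step c m)
      where
        step Nothing      = Just (insert cs emptyTrie)
        step (Just child) = Just (insert cs child)

    suffixTrie :: String -> Trie
    suffixTrie s = foldr insert emptyTrie (suffixes s)
      where
        suffixes []            = []
        suffixes xs@(_ : rest) = xs : suffixes rest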