Similar Documents
20 similar documents found (search time: 31 ms)
1.
Program specialization is a program transformation methodology which improves program efficiency by exploiting the information about the input data that is available at compile time. We show that current techniques for program specialization based on partial evaluation do not perform well on nondeterministic logic programs. We then consider a set of transformation rules which extend the ones used for partial evaluation, and we propose a strategy for guiding the application of these extended rules so as to derive very efficient specialized programs. The efficiency improvements, which are sometimes exponential, are due to the reduction of nondeterminism and to the fact that computations performed by the initial programs in different branches of the computation trees are performed by the specialized programs within single branches. In order to reduce nondeterminism we also make use of mode information for guiding the unfolding process. To exemplify our technique, we show that we can automatically derive very efficient matching programs and parsers for regular languages. The derivations we have performed could not have been done by previously known partial evaluation techniques. A preliminary version of this paper appears as: Reducing Nondeterminism while Specializing Logic Programs, Proceedings of the 24th Annual ACM Symposium on Principles of Programming Languages, Paris, France, January 15–17, 1997, ACM Press, 1997, pp. 414–427.
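
To give a concrete feel for the matcher derivation the abstract mentions, the following sketch (hypothetical OCaml written for this listing, not the authors' logic-program transformation) shows a naive substring matcher and the shape of residual program that specialization to the fixed pattern "aab" can produce: a deterministic automaton in which the naive matcher's backtracking is gone and no input character is examined twice.

    (* Naive matcher: does pattern p occur in string s? Backtracks on
       mismatch, so a character of s may be examined many times. *)
    let naive_match (p : string) (s : string) : bool =
      let lp = String.length p and ls = String.length s in
      let rec at i j = j = lp || (p.[j] = s.[i + j] && at i (j + 1)) in
      let rec from i = i + lp <= ls && (at i 0 || from (i + 1)) in
      from 0

    (* The kind of residual program specialization w.r.t. p = "aab" can
       derive: one state per matched prefix, no re-examination of input
       characters, no backtracking (compare KMP). *)
    let match_aab (s : string) : bool =
      let n = String.length s in
      let rec q0 i = i < n && (if s.[i] = 'a' then q1 (i + 1) else q0 (i + 1))
      and q1 i = i < n && (if s.[i] = 'a' then q2 (i + 1) else q0 (i + 1))
      and q2 i = i < n && (if s.[i] = 'b' then true
                           else if s.[i] = 'a' then q2 (i + 1)
                           else q0 (i + 1))
      in
      q0 0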

2.
In component-based software development, gluing two software components is usually achieved by defining an interface specification and creating wrappers on the components to support the interface. We believe that the interface specification provides useful information for specializing components. An interface may define constraints on a component's inputs as well as on its outputs. In this paper, we propose a new approach to program specialization with respect to output constraints. We describe the form an efficient specialized program should take after such specialization, and consider a variant of partial evaluation to achieve it. In the process, we translate an output constraint into a characterization function for the component's input, and define a specializer that uses this characterization to guide the specialization process. We believe this work will broaden the scope of program specialization and provide a framework for building more generic and versatile program adaptation techniques.

3.
Program specialization can divide a computation into several computation stages. This paper investigates the theoretical limitations and practical problems of standard specialization tools, presents multi-level specialization, and demonstrates that, in combination with the cogen approach, it is far more practical than previously supposed. The program generator which we designed and implemented for a higher-order functional language converts programs into very compact multi-level generating extensions that guarantee fast successive specialization. Experimental results show a remarkable reduction in generation time and generator size compared to previous attempts at multi-level specialization by self-application. Our approach to multi-level specialization seems well suited to applications where generation time and program size are critical.
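
For readers meeting the term for the first time: a generating extension of a program, applied to the static part of the input, directly emits the specialized residual program. A deliberately tiny hand-written one (a hypothetical OCaml sketch of ours, nothing like the compact higher-order generators the paper produces) looks as follows; a multi-level generating extension would analogously emit another generator rather than the final residual program.

    (* One-level generating extension for the power function: given the
       static exponent n, emit (as source text) a residual function
       specialized to n, with the recursion on n fully unrolled. *)
    let power_gen (n : int) : string =
      let rec body k = if k = 0 then "1" else "x * " ^ body (k - 1) in
      Printf.sprintf "let power_%d x = %s" n (body n)

    let () = print_endline (power_gen 3)
    (* prints: let power_3 x = x * x * x * 1 *)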

4.
This article presents a hybrid method of partial evaluation (PE), which is exactly as precise as naive online PE and nearly as efficient as state-of-the-art offline PE, for a statically typed call-by-value functional language. PE is a program transformation that specializes a program with respect to a subset of its input by reducing the program and leaving a residual program. Online PE makes the reduction/residualization decision during specialization, while offline PE makes it before specialization by using a static analysis called binding-time analysis. Compared to offline PE, online PE is more precise in the sense that it finds more redexes, but less efficient in the sense that it takes more time. To solve this dilemma, we begin with a naive online partial evaluator and make it efficient without sacrificing its precision. To this end, we (1) use state (instead of continuations) for let-insertion, (2) take a so-called cogen approach (instead of self-application), and (3) remove unnecessary let-insertion, unnecessary tags, and unnecessary values/expressions by using a type-based representation analysis, which subsumes various monovariant binding-time analyses. We implemented and compared our method and existing methods—both online and offline—in a subset of Standard ML. Experiments showed that (1) our method produces residual programs as fast as those of online PE, and (2) it does so at least twice as fast as other methods (including a cogen approach to offline PE with a polyvariant binding-time analysis) that produce comparable residual programs.
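
The online reduce-or-residualize decision is easy to show in miniature. The sketch below is a hypothetical OCaml illustration invented for this listing (the paper's evaluator, for a typed subset of Standard ML, additionally handles let-insertion, functions, and the representation analysis described above): operations whose operands are both static are reduced at specialization time; everything else is rebuilt as residual code.

    (* Expressions over one dynamic variable and integer literals. *)
    type exp = Var | Lit of int | Add of exp * exp | Mul of exp * exp

    (* Online PE: the decision is taken DURING specialization, by
       inspecting the partial values of the already-specialized subterms. *)
    let rec pe (e : exp) : exp =
      match e with
      | Var | Lit _ -> e
      | Add (a, b) ->
          (match pe a, pe b with
           | Lit x, Lit y -> Lit (x + y)   (* both static: reduce *)
           | a', b' -> Add (a', b'))       (* otherwise: residualize *)
      | Mul (a, b) ->
          (match pe a, pe b with
           | Lit x, Lit y -> Lit (x * y)
           | a', b' -> Mul (a', b'))

    (* pe (Mul (Add (Lit 2, Lit 3), Var)) = Mul (Lit 5, Var) :
       the static subcomputation 2+3 happens at specialization time. *)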

5.
Interpretation and run-time compilation techniques are increasingly important because they can support heterogeneous architectures, evolving programming languages, and dynamically-loaded code. Interpretation is simple to implement, but yields poor performance. Run-time compilation yields better performance, but is costly to implement. One way to preserve simplicity while still obtaining good performance is to apply program specialization to an interpreter in order to generate an efficient implementation of the program automatically. Such specialization can be carried out at both compile time and run time. Recent advances in program-specialization technology have significantly improved the performance of specialized interpreters. This paper presents and assesses experiments applying program specialization to both bytecode and structured-language interpreters. The results show that for some general-purpose bytecode languages, specialization of an interpreter can yield speedups of up to a factor of four, while specializing certain structured-language interpreters can yield performance comparable to that of an implementation in a general-purpose language, compiled using an optimizing compiler.
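
The source of such speedups is the removal of the interpretive dispatch that is otherwise redone on every run. A minimal hypothetical OCaml sketch of the effect (ours, not one of the systems measured in the paper): traversing the program once and producing a closure removes all tag dispatch from subsequent executions, which is essentially what specializing an interpreter with respect to a program achieves.

    type exp = Var | Lit of int | Add of exp * exp | Mul of exp * exp

    (* Interpreter: dispatches on the syntax tree at every evaluation. *)
    let rec interp (e : exp) (x : int) : int =
      match e with
      | Var -> x
      | Lit n -> n
      | Add (a, b) -> interp a x + interp b x
      | Mul (a, b) -> interp a x * interp b x

    (* Specialized form: the tree is traversed once; what remains is a
       closure containing only the program's own arithmetic. *)
    let rec compile (e : exp) : int -> int =
      match e with
      | Var -> (fun x -> x)
      | Lit n -> (fun _ -> n)
      | Add (a, b) ->
          let f = compile a and g = compile b in
          fun x -> f x + g x
      | Mul (a, b) ->
          let f = compile a and g = compile b in
          fun x -> f x * g x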

6.
CILinear: A Tool for Automatically Constructing Linear Invariants   (Cited by: 2; self-citations: 0, citations by others: 2)
Constructing invariants is an important part of program verification, and the open-source tool Interproc can construct linear invariants for a simple programming language. Building on Interproc and the C-program compilation tool CIL, we design and implement CILinear, a tool that automatically constructs linear invariants over numeric program variables for simplified C programs, and compare it with Interproc. Experiments show that CILinear constructs linear invariants effectively and supports a larger subset of the language syntax than Interproc. The practical application of CILinear in program verification is discussed through examples.
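
For orientation, a hypothetical example of ours (not taken from the paper): a linear invariant is an affine equality or inequality over a program's numeric variables that holds at a program point on every execution. For the OCaml loop below, a tool of the Interproc/CILinear kind, run on the analogous C program, would infer at the loop head that i + j = 10 and 0 <= i <= 10, without executing the loop.

    (* A loop whose numeric variables are linearly related: a linear
       invariant at the loop head is  i + j = 10  /\  0 <= i <= 10. *)
    let () =
      let i = ref 0 and j = ref 10 in
      while !i < 10 do
        incr i;
        decr j
      done;
      assert (!i + !j = 10 && !i = 10)  (* consequences at loop exit *)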

7.
Program understanding can be assisted by tools that match patterns in the program source. Lexical pattern matchers provide excellent performance and ease of use, but have a limited vocabulary. Syntactic matchers provide more precision, but may sacrifice performance, robustness, or power. To achieve more of the benefits of both models, we extend the pattern syntax of AWK to support matching of abstract syntax trees, as demonstrated in a tool called TAWK. Its pattern syntax is language‐independent, based on abstract tree patterns. As in AWK, patterns can have associated actions, which in TAWK are written in C for generality, familiarity, and performance. The use of C is simplified by high‐level libraries and dynamic linking. To allow processing of program files containing non‐syntactic constructs such as textual macros, mechanisms have been designed that allow matching of ‘language‐like’ macros in a syntactic fashion. We survey and apply prototypical approaches to concretely demonstrate the tradeoffs in program processing. Our results indicate that TAWK can be used to quickly and easily perform a variety of common software engineering tasks, and the extensions to accommodate non‐syntactic features significantly extend the generality of syntactic matchers. Copyright © 2005 John Wiley & Sons, Ltd.

8.
Program and data specialization have always been studied separately, although they are both aimed at performing computations early. Program specialization encodes the result of early computations into a new program, while data specialization encodes the result of early computations into data structures. In this paper, we present an extension of the Tempo specializer which performs both program and data specialization. We show how these two strategies can be integrated in a single specializer. This new kind of specializer provides the programmer with complementary strategies which widen the scope of specialization. We illustrate the benefits and limitations of these strategies and their combination on a variety of programs.
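
The division of labor between the two strategies can be shown on the standard power example (a hypothetical OCaml sketch of ours, not Tempo output): program specialization encodes the early computation into residual code, while data specialization encodes it into a structure that a small generic residual program reads back.

    (* Subject program. *)
    let rec power x n = if n = 0 then 1 else x * power x (n - 1)

    (* Program specialization w.r.t. static n = 3: the early computation
       (the recursion over n) is encoded in the residual CODE. *)
    let power_3 x = x * (x * (x * 1))

    (* Data specialization w.r.t. static x = 2 (dynamic n, bounded here
       by 10): the early computations are encoded in a DATA structure,
       and the residual program merely indexes it. *)
    let table_2 = Array.init 11 (fun n -> power 2 n)
    let power_2 n = table_2.(n)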

9.
PICASSO (PICture Aided Sophisticated Sketch Of database queries) is a graphics-based database query language designed for use with a universal relation database system. The primary objective of PICASSO is ease of use. Graphics are used to provide a simple method of expressing queries and to give the user visual feedback about the system's interpretation of the query. Inexperienced users can use the graphical feedback to aid them in formulating queries, whereas experienced users can ignore it. Inexperienced users can pose queries without knowing the details of the underlying database schema and without learning the formal syntax of an SQL-like query language. This paper presents the syntax of PICASSO queries and compares PICASSO queries with similar queries in standard relational query languages. Comparisons are also made with System/U, a non-graphical universal relation system on which PICASSO is based. The hypergraph semantics of the universal relation are used as the foundation for PICASSO, and their integration with a graphical workstation enhances the usability of database systems.

10.
This paper describes a self-applicable offline (static) partial evaluator for a flowchart language, consisting of four parts: live-variable analysis, abstract analysis, annotation, and specialization. Performing the abstract analysis on top of the live-variable analysis yields a more precise abstract interpretation than previous abstract analyses, and is more conducive to producing high-quality residual programs. Transition compression is performed directly during specialization.

11.
A generating extension of a program specializes the program with respect to part of its input. Applying a partial evaluator to the program trivially yields a generating extension, but specializing the partial evaluator with respect to the program often yields a more efficient one. This specialization can be carried out by the partial evaluator itself; in this case, the process is known as the second Futamura projection. We derive an ML implementation of the second Futamura projection for Type-Directed Partial Evaluation (TDPE). Due to the differences between traditional, syntax-directed partial evaluation and TDPE, this derivation involves several conceptual and technical steps. These include a suitable formulation of the second Futamura projection and techniques for making TDPE amenable to self-application. In the context of the second Futamura projection, we also compare and relate TDPE with conventional offline partial evaluation. We demonstrate our technique with several examples, including compiler generation for Tiny, a prototypical imperative language.
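
For context, the Futamura projections referred to above are standardly written as follows, where mix is a self-applicable partial evaluator, int an interpreter, and p a program interpreted by int (standard background, reproduced here for orientation; not a result of the paper):

    target   = mix (int, p)      (* 1st projection: a compiled version of p  *)
    compiler = mix (mix, int)    (* 2nd projection: mix specialized to int   *)
    cogen    = mix (mix, mix)    (* 3rd projection: a generator of compilers *)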

12.
Specification mining takes execution traces as input and extracts likely program invariants, which can be used for comprehension, verification, and evolution related tasks. In this work we integrate scenario-based specification mining, which uses a data-mining algorithm to suggest ordering constraints in the form of live sequence charts, an inter-object, visual, modal, scenario-based specification language, with mining of value-based invariants, which detects likely invariants holding at specific program points. The key to the integration is a technique we call scenario-based slicing, running on top of the mining algorithms to distinguish the scenario-specific invariants from the general ones. The resulting suggested specifications are rich, consisting of modal scenarios annotated with scenario-specific value-based invariants, referring to event parameters and participating object properties. We have implemented the mining algorithm and the visual presentation of the mined scenarios within a standard development environment. An evaluation of our work over a number of case studies shows promising results in extracting expressive specifications from real programs, which could not be extracted previously. The more expressive the mined specifications, the higher their potential to support program comprehension and testing.

13.
Partially evaluating a procedural program amounts to building a series of mutually-recursive specialized procedures. When a procedure call in the source program gets specialized into a residual call, the called procedure needs to be processed to occur in the residual program. Because the order of procedure definitions in the residual program is immaterial, it does not matter in which order these two events — building the residual call and building the residual procedure — are scheduled. Therefore, partial evaluation offers a basic opportunity for an MIMD type of parallelism with shared global memory where, in essence, the mutually-recursive specialized procedures are built in parallel as specialization points are met, and the relation binding source and residual procedures is globalized to preserve its uniqueness. We have translated a sequential partial evaluator written in T (a dialect of Scheme) into Mul-T (a parallel extension of T) by adding one semaphore for each specialization point and one future to construct the residual procedure in parallel with the current specialization. The resulting parallel partial evaluator has been observed to be faster than the sequential one in proportion to the size of the source program and to the number of specialized procedures in the residual program. Our sequential partial evaluator is self-applicable. Because the semaphores and the future are run-time operations, our parallel partial evaluator is still self-applicable. In principle it can be, and in practice we have used it to generate parallel compilers, i.e., specializers dedicated to an interpreter and processing its static and dynamic semantics in parallel, non-trivially. Again, parallelism in dedicated specializers is determined by the size of the source program and the number of specialized procedures in the residual program. This work was supported by DARPA under Grant N00014-88-K-0573. This work was carried out during a summer visit to Yale University in 1990. This paper was written at Kansas State University and completed during a spring visit to Carnegie Mellon University in 1993.
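
As a rough sketch of the scheduling discipline described above (hypothetical OCaml 5 code of ours, using domains in place of Mul-T futures and a mutex-protected table as the shared global memory; all names are invented):

    (* Shared global memory: the relation binding specialization points
       to residual procedures must stay unique across workers. *)
    let seen : (string, unit) Hashtbl.t = Hashtbl.create 64
    let lock = Mutex.create ()

    (* Claim a specialization point; at most one worker builds each
       residual procedure (the paper's one-semaphore-per-point idea). *)
    let claim key =
      Mutex.lock lock;
      let fresh = not (Hashtbl.mem seen key) in
      if fresh then Hashtbl.add seen key ();
      Mutex.unlock lock;
      fresh

    (* On meeting a specialization point: the residual call is built by
       the caller right away, while the residual procedure is built in
       parallel (a future) if this point has not been claimed yet. *)
    let at_spec_point key build_residual_proc =
      if claim key then Some (Domain.spawn build_residual_proc) else None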

14.
15.
It is our habit, in writing an English composition, that as we write each word, each phrase, each sentence, and each paragraph, we consciously or unconsciously check the syntax and the semantics of the composition just written. Writing a computer program in a high-level language could be made similar to writing a composition in English. In this case, a highly interactive high-level language system checks the syntax and the semantics of the high-level language program as each symbol, each expression, and each statement is being entered at the terminal. By the time the source program is completely entered, it could already have been debugged and could have run once.

16.
The language of universal algebras is used as a model for programming-language specification. BNF rules are employed to specify the signature of the language algebra instead of the context-free syntax. The algorithm for program evaluation is inductively defined by the following universal-algebraic construction:
Any function defined on the generators of a free algebra taking values in the carrier of another similar algebra can be uniquely extended to a homomorphism between the two algebras.

Any conventional programming language can be specified by a finite set of BNF rules, and its algebra of symbols is generated by a finite set of generator classes. Thus any function defined on the finite set of generators offers an algebraic mechanism for a universal algorithm for source-language program evaluation.
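
Concretely, the displayed principle is the familiar fold: fixing where the generators go determines the entire homomorphism. A hypothetical OCaml sketch of ours (not the paper's notation):

    (* The free algebra of terms over integer generators and one binary
       operation, plus a signature-matching target algebra. *)
    type term = Gen of int | Op of term * term
    type 'a algebra = { gen : int -> 'a; op : 'a -> 'a -> 'a }

    (* The unique homomorphic extension of [a.gen]: by the principle
       above, program evaluation is exactly this fold. *)
    let rec eval (a : 'a algebra) (t : term) : 'a =
      match t with
      | Gen n -> a.gen n
      | Op (l, r) -> a.op (eval a l) (eval a r)

    (* Two interpretations of the same "program": evaluate it, or
       pretty-print it, by picking different target algebras. *)
    let value  = { gen = (fun n -> n); op = ( + ) }
    let render = { gen = string_of_int;
                   op = (fun l r -> "(" ^ l ^ " + " ^ r ^ ")") }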


17.
A program language can be defined as the language in which computer programs are written, and a programming language as the language used by the programmer to create programs. This paper presents the design of an interactive program development system which uses Pascal as both the program and the programming language. The principal properties of the system are a complete, immediate syntax check, a program-structure-oriented editor, incremental compiling techniques, and interactive interpretation and debugging of programs. The syntax check is split into three phases, and the user can change the degree of checking wanted. After a change to the program, only part of it is recompiled, and only the necessary phases of the compiling process are performed.

18.
Macro programming is one of the powerful features of modern high-performance CNC (computer numerical control) systems. Combined with the other functions of a CNC system, it can accomplish complex machining tasks such as hole groups, while keeping programming very simple for the user. This paper uses Backus-Naur Form (BNF) to define the grammar of the macro programming language in a CNC system. On this basis, the lexical analysis, parsing, and interpretive-execution algorithms and programs for macro programs are designed. Finally, an application example is analyzed.

19.
Datatype specialization is a form of subtyping that captures program invariants on data structures that are expressed using the convenient and intuitive datatype notation. Of particular interest are structural invariants such as well-formedness. We investigate the use of phantom types for describing datatype specializations. We show that it is possible to express statically-checked specializations within the type system of Standard ML. We also show that this can be done in a way that does not lose useful programming facilities such as pattern matching in case expressions.
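
A minimal instance of the idea (a hypothetical OCaml sketch of ours; the paper works in Standard ML and treats datatype specialization in general): a phantom type parameter records a structural invariant, here non-emptiness of a list, so that a misuse such as taking the head of a possibly-empty list is rejected by the type checker, while the representation stays an ordinary list.

    (* 'p is a phantom parameter: it does not occur in the representation
       and only records whether the list is known to be non-empty. *)
    type empty_ok = Empty_ok
    type nonempty = Nonempty

    module L : sig
      type ('a, 'p) t
      val nil  : ('a, empty_ok) t
      val cons : 'a -> ('a, 'p) t -> ('a, nonempty) t
      val hd   : ('a, nonempty) t -> 'a    (* total: cannot raise *)
    end = struct
      type ('a, 'p) t = 'a list
      let nil = []
      let cons x xs = x :: xs
      let hd = function x :: _ -> x | [] -> assert false  (* unreachable *)
    end

    (* L.hd (L.cons 1 L.nil) type-checks; L.hd L.nil is a type error. *)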

20.
刘树锟, 阳小华, 刘杰. 《计算机工程与设计》 (Computer Engineering and Design), 2007, 28(18): 4536-4538, 4545
Dynamic generation of program invariants makes it possible to analyze relational properties inside a program, which helps in designing high-quality program code and a well-structured program architecture. This paper describes a basic theoretical model of contract-based discovery of likely program invariants, uses the Java Modeling Language (JML) to further explain techniques for dynamically generating program assertions and the key problems they currently raise, and gives corresponding solutions to the problems identified.
