Similar Documents
20 similar documents found (search time: 359 ms)
1.
J. W. Hughes, M. S. Powell. Software, 1983, 13(12): 1099-1112
DTL is an experimental programming language that developed from an investigation of data-structured design methods and data-driven programming techniques. A DTL program is derived from a specification of the structure of its valid input and output languages. The program's function is defined as a translation between these languages. A complex translation can be hierarchically structured into a network of simpler translations by stepwise refinement.
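The abstract's central idea, that a translator's structure mirrors the structure of its input and output languages, can be sketched in a few lines of Python. The record format and report layout below are invented purely for illustration and are not taken from the DTL paper.

def translate_record(line):
    # Translate one input record "name,score" into one report line.
    name, score = line.split(",")
    return f"{name.strip():<10}{int(score):>5}"

def translate(input_text):
    # The overall translation is composed from simpler translations:
    # a header translation plus a per-record translation applied to
    # each element of the input sequence.
    records = [l for l in input_text.splitlines() if l.strip()]
    header = f"{'NAME':<10}{'MARK':>5}"
    return "\n".join([header] + [translate_record(r) for r in records])

if __name__ == "__main__":
    print(translate("ada,92\ngrace,88\n"))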

2.
The advantage of COOZ (Complete Object-Oriented Z) is its ability to specify large-scale software, but it does not support the refinement calculus, which limits its application in software development. Adding the refinement calculus to COOZ overcomes this disadvantage during design and implementation, and removes the separation between design and implementation in both structure and notation, so that software can be developed smoothly within a single framework. The combination of COOZ and the refinement calculus builds an object-oriented framework in which a COOZ specification is refined stepwise to code by calculation. In this paper, a development model based on COOZ and the refinement calculus is established. Data refinement is harder to handle in a refinement tool than ordinary algorithmic refinement, since data refinement usually has to be applied to a large program component at once. Regarding the implementation technology of the refinement calculus, a data refinement calculator is constructed and an approach to data refinement based on the data refinement calculus and program window inference is offered.

3.
Efficiency is a problem in automatic programming—both in the programs produced and in the synthesis process itself. The efficiency problem arises because many target-language programs (which vary in their time and space performance) typically satisfy one abstract specification. This paper presents a framework for using analysis and searching knowledge to guide program synthesis in a stepwise refinement paradigm. A particular implementation of the framework, called libra, is described. Given a program specification that includes size and frequency notes, the performance measure to be minimized, and some limits on synthesis resources, libra selects algorithms and data representations and decides whether to use ‘optimizing’ transformations. By applying incremental, algebraic program analysis, explicit rules about plausible implementations, and resource allocation on the basis of decision importance, libra has guided the automatic implementation of a number of programs in the domain of symbolic processing.
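As a rough illustration of the kind of decision the abstract describes, the sketch below picks a data representation by minimising an algebraic cost estimate computed from workload "size and frequency notes". The candidate representations and cost formulas are made up for the example and are not libra's actual rules.

WORKLOAD = {"size": 10_000, "lookups": 50_000, "insertions": 1_000}

# Algebraic cost estimates (abstract time units) for each candidate.
CANDIDATES = {
    "sorted_array": lambda w: 14 * w["lookups"] + w["size"] * w["insertions"],
    "hash_table":   lambda w: 2 * w["lookups"] + 3 * w["insertions"],
    "linked_list":  lambda w: (w["size"] // 2) * w["lookups"] + w["insertions"],
}

def choose_representation(workload):
    # Pick the representation minimising the chosen performance measure.
    costs = {name: cost(workload) for name, cost in CANDIDATES.items()}
    return min(costs, key=costs.get), costs

if __name__ == "__main__":
    best, costs = choose_representation(WORKLOAD)
    print(costs)
    print("chosen representation:", best)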

4.
The possibility of reexpressing the traditional notion of stepwise refinement as a combination of general problem-solving activities, based on paradigms taken from artificial intelligence research, is discussed. This reexpression can form the basis for a more explicit view of programming as a problem-solving activity. Experiments in which each step of the refinement process is encoded into problem-solving activities are described. Twenty-six examples of code implementation using the stepwise refinement of pseudocode have been analyzed. The presence of certain combinations of activities suggests that programmers are implicitly emulating certain paradigms that have proved useful in solving complex problems. Also, a particular paradigm and its associated activities seem to be applied often throughout the refinement sequence for a given problem. The nature of the problem to be solved influences the type of activities performed to achieve a solution, as well as the problem-solving paradigm that they implicitly support.

5.
Translation validation was invented in the 1990s by Pnueli et al. as a technique to formally verify the correctness of code generators. Rather than certifying the code generator or exhaustively qualifying it, translation validators attempt to verify that program transformations preserve semantics. In this work, we adopt this approach to formally verify that clock semantics and data dependence are preserved during compilation by the Signal compiler. Translation validation is implemented for every compilation phase, from the initial phase until the final phase where the executable code is generated, by proving that the transformation in each phase of the compiler preserves the semantics. We represent the clock semantics and the data dependence of a program and its transformed counterpart as first-order formulas, called clock models and synchronous dependence graphs (SDGs), respectively. We then introduce clock refinement and dependence refinement relations, which express the preservation of clock semantics and dependence as relations on clock models and SDGs, respectively. Our validator does not require any instrumentation or modification of the compiler, nor any rewriting of the source program.
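To make the refinement check concrete, here is a deliberately simplified sketch: clock models are reduced to propositional formulas over boolean clock variables, and the refinement (every assignment satisfying the transformed model also satisfies the source model) is checked by exhaustive enumeration. The actual validator works with first-order formulas and a solver; the example models below are invented.

from itertools import product

def refines(concrete, abstract, variables):
    # True iff every clock assignment satisfying the transformed (concrete)
    # model also satisfies the source (abstract) model.
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if concrete(env) and not abstract(env):
            return False
    return True

if __name__ == "__main__":
    # Source model: the clock of y equals the clock of x.
    abstract = lambda e: e["cy"] == e["cx"]
    # Transformed model: y is present exactly when x is present and the
    # activation clock ticks, and the activation clock is always on.
    concrete = lambda e: (e["cy"] == (e["cx"] and e["act"])) and e["act"]
    print(refines(concrete, abstract, ["cx", "cy", "act"]))   # True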

6.
A top-down method is presented for the derivation of algorithms from a formal specification of a problem. This method has been implemented in a system called cypress. The synthesis process involves the top-down decomposition of the initial specification into a hierarchy of specifications for subproblems. Synthesizing programs for each of these subproblems results in the composition of a hierarchically structured program. The initial specification is allowed to be partial in that some or all of the input conditions may be missing. cypress completes the specification and produces a totally correct applicative program. Much of cypress' knowledge comes in the form of ‘design strategies’ for various classes of algorithms. The structure of a class of divide-and-conquer algorithms is explored and provides the basis for several design strategies. Detailed derivations of mergesort and quicksort algorithms are presented.
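The divide-and-conquer program scheme that underlies design strategies of this kind can be written as a generic template with slots for a primitive test, a direct solution, a decomposition and a composition; instantiating the slots yields mergesort. The slot names below are illustrative and are not cypress's internal notation.

def divide_and_conquer(is_primitive, solve_directly, decompose, compose):
    def solve(problem):
        if is_primitive(problem):
            return solve_directly(problem)
        parts = decompose(problem)
        return compose([solve(p) for p in parts])
    return solve

def merge(parts):
    # Compose two sorted sublists into one sorted list.
    left, right = parts
    out = []
    while left and right:
        out.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return out + left + right

mergesort = divide_and_conquer(
    is_primitive=lambda xs: len(xs) <= 1,
    solve_directly=lambda xs: xs,
    decompose=lambda xs: (xs[: len(xs) // 2], xs[len(xs) // 2 :]),
    compose=merge,
)

if __name__ == "__main__":
    print(mergesort([5, 2, 9, 1, 5, 6]))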

7.
8.
9.
An alternative approach to developing reusable components from scratch is to recover them from existing systems. We apply program slicing, a program decomposition method, to the problem of extracting reusable functions from ill-structured programs. As with conventional slicing, first described by M. Weiser (1984), a slice is obtained by iteratively solving data-flow equations based on a program flow graph. We extend the definition of a program slice to a transform slice, one that includes the statements which contribute directly or indirectly to transforming a set of input variables into a set of output variables. Unlike conventional program slicing, these statements do not include either the statements necessary to get input data or the statements which test the binding conditions of the function. Transform slicing presupposes the knowledge that a function is performed in the code and its partial specification, in terms of input and output data only. Using domain knowledge, we discuss how to formulate expectations of the functions implemented in the code. In addition to the input/output parameters of the function, the slicing criterion depends on an initial statement, which is difficult to obtain for large programs. Using the notions of decomposition slice and concept validation, we show how to produce a set of candidate functions, which are independent of line numbers but must be evaluated with respect to the expected behavior. Although human interaction is required, the limited size of candidate functions makes this task easier than looking for the last function instruction in the original source code.
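A much-simplified illustration of the data-flow step behind slicing: starting from the output variables of the wanted function, work backwards through def/use sets and keep every statement that contributes to them. Transform slicing as described above additionally excludes input and binding-condition statements; this sketch shows only the backward pass, and the statement table is invented for illustration.

STATEMENTS = [  # (line, defs, uses, text)
    (1, {"n"},  set(),      "read n"),
    (2, {"s"},  set(),      "s = 0"),
    (3, {"p"},  set(),      "p = 1"),
    (4, {"s"},  {"s", "n"}, "s = s + n"),
    (5, {"p"},  {"p", "n"}, "p = p * n"),
    (6, set(),  {"s"},      "print s"),
]

def backward_slice(statements, criterion_vars):
    # Keep a statement if it defines a variable that is still relevant,
    # then propagate relevance to the variables it uses.
    relevant, kept = set(criterion_vars), []
    for line, defs, uses, text in reversed(statements):
        if defs & relevant:
            kept.append((line, text))
            relevant = (relevant - defs) | uses
    return list(reversed(kept))

if __name__ == "__main__":
    # Slice on the output variable of the suspected "sum" function.
    for line, text in backward_slice(STATEMENTS, {"s"}):
        print(line, text)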

10.
A methodology for the derivation of parallel implementations from program specifications is developed. The goal of the methodology is to decompose a program specification into a collection of module specifications via property refinement, such that each module may be implemented independently by a subprogram. The correctness of the implementation is then deduced from the correctness of the property refinement procedure and the correctness of the individual subprograms. The refinement strategy is based on identifying frequently occurring control structures such as sequential composition and iteration. The methodology is developed in the context of the UNITY logic and the UC programming language, and illustrated through the solution of diffusion aggregation in fluid flow simulations.

11.
A machine learning technique called Graph-Based Induction (GBI) efficiently extracts typical patterns from graph data by stepwise pair expansion (pairwise chunking). In this paper, we introduce GBI for general graph-structured data, which can handle directed/undirected, colored/uncolored graphs with or without (self) loops and with colored/uncolored links. We show that its time complexity is almost linear in the size of the graph. We further show that GBI can effectively be applied to the extraction of typical patterns from DNA sequence data and organochlorine compound data, from which classification rules are to be generated, and that GBI also works as a feature construction component for other machine learning tools.
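One step of pairwise chunking can be sketched as follows: find the most frequent (source-label, target-label) edge pair in a directed labelled graph and replace each occurrence by a single chunk node. Real GBI iterates this step and handles the graph variants listed above; the toy graph here is invented for illustration.

from collections import Counter

# Graph as a node-label map plus a list of directed edges.
labels = {1: "A", 2: "B", 3: "A", 4: "B", 5: "C"}
edges = [(1, 2), (3, 4), (4, 5), (2, 5)]

def most_frequent_pair(labels, edges):
    counts = Counter((labels[a], labels[b]) for a, b in edges)
    return counts.most_common(1)[0]      # ((src_label, dst_label), count)

def chunk(labels, edges, pair):
    # Rewrite every edge whose endpoint labels match `pair` into a chunk node.
    new_labels, new_edges, merged = dict(labels), [], {}
    for a, b in edges:
        if (labels[a], labels[b]) == pair:
            node = max(new_labels) + 1
            new_labels[node] = f"({pair[0]}->{pair[1]})"
            merged[a] = merged[b] = node
    for a, b in edges:
        if (labels[a], labels[b]) != pair:
            new_edges.append((merged.get(a, a), merged.get(b, b)))
    for old in merged:                    # drop the nodes absorbed by chunks
        new_labels.pop(old, None)
    return new_labels, new_edges

if __name__ == "__main__":
    pair, count = most_frequent_pair(labels, edges)
    print("chunking", pair, "seen", count, "times")
    print(chunk(labels, edges, pair))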

12.
13.
The transformational programming method of algorithm derivation starts with a formal specification of the result to be achieved, plus some informal ideas as to what techniques will be used in the implementation. The formal specification is then transformed into an implementation by means of correctness-preserving refinement and transformation steps, guided by the informal ideas. The transformation process typically includes the following stages: (1) formal specification; (2) elaboration of the specification; (3) divide and conquer to handle the general case; (4) recursion introduction; (5) recursion removal, if an iterative solution is desired; (6) optimisation, if required. At any stage in the process, sub-specifications can be extracted and transformed separately. The main difference between this approach and the invariant-based programming approach (and similar stepwise refinement methods) is that loops can be introduced and manipulated while maintaining program correctness, with no need to derive loop invariants. Another difference is that at every stage in the process we are working with a correct program: there is never any need for a separate “verification” step. These factors help to ensure that the method is capable of scaling up to the development of large and complex software systems. The method is applied to the derivation of a complex linked-list algorithm and produces code which is over twice as fast as the code written by Donald Knuth to solve the same problem.
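A tiny end-to-end illustration of stages (1)-(5) above, on a much simpler problem than the paper's linked-list algorithm: computing the length of a list. Each function is one refinement step; the example is ours, not the paper's derivation.

# (1) Formal specification: length(xs) = number of elements of xs.
def length_spec(xs):
    return sum(1 for _ in xs)          # executable statement of the spec

# (3)-(4) Divide and conquer plus recursion introduction.
def length_rec(xs):
    if not xs:                         # base case
        return 0
    return 1 + length_rec(xs[1:])      # general case

# (5) Recursion removal: the recursion becomes an accumulation loop.
def length_iter(xs):
    n = 0
    while xs:
        n, xs = n + 1, xs[1:]
    return n

if __name__ == "__main__":
    data = list("transformation")
    assert length_spec(data) == length_rec(data) == length_iter(data) == 14
    print(length_iter(data))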

14.
The PARSE design methodology provides a hierarchical, object-based approach to the development of parallel software systems. A system design is initially structured into a collection of concurrently executing objects which communicate via message passing. A graphical notation known as the process graph is then used to capture the structural and important dynamic properties of the system. Process graph designs can then be semi-mechanically transformed into complete Petri nets to give a detailed, executable and potentially verifiable design specification. From a complete design, translation rules for target programming languages are defined to enable the implementation to proceed in a systematic manner. The paper describes the steps in the PARSE methodology and the process graph notation, and illustrates the application of PARSE from design specification to program code using a network protocol example.

15.
A refinement calculus for the development of real-time systems is presented. The calculus is based upon a wide-spectrum language called TAM (the Temporal Agent Model), within which both functional and timing properties can be expressed in either abstract or concrete terms. A specification-oriented semantics is given for the language. Program development is considered as a refinement process, i.e. the calculation of a structured program from an unstructured specification. An example program is developed.

16.
A refinement paradigm for implementing a high-level specification in a low-level target language is discussed. In this paradigm, coding and analysis knowledge work together to produce an efficient program in the target language. Since there are many possible implementations for a given specification of a program, searching knowledge is applied to increase the efficiency of the process of finding a good implementation. For example, analysis knowledge is applied to determine upper and lower cost bounds on alternate implementations, and these bounds are used to measure the potential impact of different design decisions and to decide which alternatives should be pursued. In this paper we also describe a particular implementation of this program synthesis paradigm, called PSI/SYN, that has automatically implemented a number of programs in the domain of symbolic processing.
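The pruning rule mentioned above can be shown in a few lines: alternatives whose lower cost bound already exceeds the best known upper bound need not be pursued. The candidate implementations and their bounds are invented numbers used only to demonstrate the rule, not PSI/SYN's actual analysis.

CANDIDATES = [
    # (name, lower_bound, upper_bound) of estimated cost
    ("linear_scan",      100, 100),
    ("binary_search",     10,  40),
    ("hash_lookup",        5,  25),
    ("exhaustive_pairs",  90, 900),
]

def prune_and_rank(candidates):
    best_upper = min(u for _, _, u in candidates)
    kept = [(name, lo, up) for name, lo, up in candidates if lo <= best_upper]
    pruned = [name for name, lo, _ in candidates if lo > best_upper]
    # Remaining alternatives are explored cheapest-lower-bound first.
    return sorted(kept, key=lambda c: c[1]), pruned

if __name__ == "__main__":
    kept, pruned = prune_and_rank(CANDIDATES)
    print("pruned without further synthesis:", pruned)
    print("explore in this order:", [name for name, *_ in kept])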

17.
Parallel languages allow the programmer to express parallelism at a high level. The management of parallelism and the generation of interprocessor communication is left to the compiler and the runtime system. This approach to parallel programming is particularly attractive if a suitable widely accepted parallel language is available. High Performance Fortran (HPF) has emerged as the first popular machine independent parallel language, and remarkable progress has been made towards compiling HPF efficiently. However, the performance of HPF programs is often poor and unpredictable, and obtaining adequate performance is a major stumbling block that must be overcome if HPF is to gain widespread acceptance. The programmer is often in the dark about how to improve the performance of an HPF program since poor performance can be attributed to a variety of reasons, including poor choice of algorithm, limited use of parallelism, or an inefficient data mapping. This paper presents a profiling tool that allows the programmer to identify the regions of the program that execute inefficiently, and to focus on the potential causes of poor performance. The central idea is to distinguish the code that is executing efficiently from the code that is executing poorly. Efficient code uses all processors of a parallel system to make progress, while inefficient code causes processors to wait, execute replicated code, idle, communicate, or perform compiler bookkeeping. We designate the latter code as non-scalable, since adding more processors generally does not lead to improved performance for such code. By analogy, the former code is called scalable. The tool presented here separates a program into scalable and non-scalable components and identifies the causes of non-scalability of different components. We show that compiler information is the key to dividing the execution times into logical categories that are meaningful to the programmer. We present the design and implementation of a profiler that is integrated with Fx, a compiler for a variant of HPF. The paper includes two examples that demonstrate how the data reported by the profiler are used to identify and resolve performance bugs in parallel programs. © 1997 John Wiley & Sons, Ltd.
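The central bookkeeping can be pictured schematically: split each region's measured time into categories and label the region by how much of its time is spent making progress on all processors (scalable) versus waiting, replicating work, idling, communicating or doing compiler bookkeeping (non-scalable). The region names and timings below are fabricated, and this sketch shows only the reporting step, not the Fx-integrated profiler.

SCALABLE = {"compute"}
NON_SCALABLE = {"wait", "replicated", "idle", "communicate", "bookkeeping"}

profile = {  # region -> {category: seconds}
    "init_grid":    {"compute": 0.4, "communicate": 0.1},
    "sweep_loop":   {"compute": 6.0, "wait": 2.5, "communicate": 1.5},
    "reduce_error": {"compute": 0.2, "replicated": 0.8},
}

def report(profile):
    for region, times in profile.items():
        scal = sum(t for c, t in times.items() if c in SCALABLE)
        nonscal = sum(t for c, t in times.items() if c in NON_SCALABLE)
        share = nonscal / (scal + nonscal)
        print(f"{region:<14} scalable {scal:5.1f}s  "
              f"non-scalable {nonscal:5.1f}s  ({share:4.0%} of region)")

if __name__ == "__main__":
    report(profile)   # regions with a high non-scalable share are suspects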

18.
The refinement calculus is a well-established theory for formal development of imperative program code and is supported by a number of automated tools. Via a detailed case study, this article shows how refinement theory and tool support can be extended for a program with real-time constraints. The approach adapts a timed variant of the refinement calculus and makes corresponding enhancements to a theorem-prover based refinement tool.

19.
Although dataflow computers have many attractive features, skepticism exists concerning their efficiency in handling arrays (vectors) in high-performance scientific computation. This paper outlines an efficient implementation scheme for arrays in applicative languages (such as VAL and SISAL) based on the principles of dataflow software pipelining. It illustrates how the fine-grain parallelism of the dataflow approach can effectively handle the large amounts of data structured by applicative array operations. This is done through dataflow software pipelining between pairs of code blocks which act as producer and consumer of array values. To make effective use of the pipelined code mapping scheme, a compiler needs information concerning the overall program structure as well as the structure of each code block. An applicative language provides a basis for such analysis.

The program transformation techniques described here are developed primarily for the computationally intensive part of a scientific numerical program, which is usually formed by one or a few clusters of acyclic connected code blocks. Each code block defines an array value from several input arrays. We outline how mapping decisions for arrays can be based on a global analysis of the attributes of the code blocks. We emphasize the role of overall program structure and the strategy of global optimization of the machine code structure. The structure of a proposed dataflow compiler based on the scheme described in this paper is outlined.
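A rough analogue of producer-consumer pipelining between code blocks can be written with Python generators: the consumer starts working on early array elements before the producer has finished the whole array, so no full intermediate array is materialised. This only illustrates the producer-consumer structure, not a dataflow machine or the VAL/SISAL code blocks themselves.

def producer(n):
    # Code block defining an array value element by element.
    for i in range(n):
        yield i * i                       # element i of the produced array

def consumer(elements):
    # Downstream code block consuming the array as elements arrive.
    running = 0
    for x in elements:
        running += x
        yield running                     # prefix sums of the incoming array

if __name__ == "__main__":
    # The two blocks run in pipelined fashion: no full intermediate array.
    pipeline = consumer(producer(5))
    print(list(pipeline))                 # [0, 1, 5, 14, 30]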


20.
Reflective systems allow their own structures to be altered from within. Here we are concerned with a style of reflection, called linguistic reflection, which is the ability of a running program to generate new program fragments and to integrate these into its own execution. In particular, we describe how this kind of reflection may be provided in the compiler-based, strongly typed object-oriented programming language Java. The advantages of the programming technique include attaining high levels of genericity and accommodating system evolution. These advantages are illustrated by an example taken from persistent programming, which shows how linguistic reflection allows functionality (program code) to be generated on demand (Just-In-Time) from a generic specification and integrated into the evolving running program. The technique is evaluated against alternative implementation approaches with respect to efficiency, safety and ease of use. © 1998 John Wiley & Sons, Ltd.
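The paper's technique is specific to Java and its compiler; purely to make the idea of linguistic reflection concrete, here is a toy analogue in Python, the language used for the other sketches on this page: a running program generates source text for a record constructor from a generic description and integrates the compiled fragment into its own execution. The names and the record description are illustrative only.

RECORD_FIELDS = {"name": str, "age": int}   # generic "specification"

def generate_constructor(fields):
    # Generate source text for a record constructor specialised to `fields`.
    params = ", ".join(fields)
    checks = "\n".join(
        f"    assert isinstance({f}, {t.__name__}), '{f} must be a {t.__name__}'"
        for f, t in fields.items()
    )
    return f"def make_record({params}):\n{checks}\n    return locals()\n"

def integrate(source):
    # Compile the generated fragment and bind it into the running program.
    namespace = {}
    exec(compile(source, "<generated>", "exec"), namespace)
    return namespace["make_record"]

if __name__ == "__main__":
    source = generate_constructor(RECORD_FIELDS)
    make_record = integrate(source)       # generated on demand, then used
    print(make_record(name="Ada", age=36))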
