Found 20 similar documents; search took 46 ms
1.
2.
A pointer logic and certifying compiler   (cited 6 times in total: 0 self-citations, 6 by others)
Chen Yiyun, Ge Lin, Hua Baojian, Li Zhaopeng, Liu Cheng, Wang Zhifang. Frontiers of Computer Science in China, 2007, 1(3): 297-312
Proof-Carrying Code brings two big challenges to the research field of programming languages. One is to seek more expressive logics or type systems to specify or reason about the properties of low-level or high-level programs. The other is to study the technology of certifying compilation, in which the compiler generates proofs for annotated programs. This paper presents our progress in both aspects. A pointer logic was designed for PointerC (a C-like programming language) in our research. As an extension of Hoare logic, our pointer logic expresses the change of pointer information for each statement in its inference rules to support program verification. Meanwhile, based on ideas from CAP (Certified Assembly Programming) and SCAP (Stack-based Certified Assembly Programming), a reasoning framework was built to verify properties of object code in a Hoare style, and a certifying compiler prototype for PointerC was implemented on top of this framework. The main contribution of this paper is the design of the pointer logic and the implementation of the certifying compiler prototype. In our certifying compiler, the source language contains rich pointer types and operations and also supports dynamic storage allocation and deallocation.
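The kind of per-statement pointer rule the abstract describes can be sketched in Hoare style. The rule below is an illustrative, separation-logic-flavored assignment rule recording how pointer information changes across a store through a pointer; it is not the paper's actual PointerC inference rule.

```latex
% Illustrative only: a Hoare-style rule for assignment through a pointer,
% recording the change of pointer information (not the actual PointerC rule).
\frac{}{\{\, p \mapsto - \,\}\;\; {*}p = e\;\; \{\, p \mapsto e \,\}}
```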
3.
Qingda Lu, Xiaoyang Gao, Sriram Krishnamoorthy, Gerald Baumgartner, J. Ramanujam, P. Sadayappan. Journal of Parallel and Distributed Computing, 2012
Empirical optimizers like ATLAS have been very effective in optimizing computational kernels in libraries. The best choice of parameters such as tile size and degree of loop unrolling is determined in ATLAS by executing different versions of the computation. In contrast, optimizing compilers use a model-driven approach to program transformation. While the model-driven approach of optimizing compilers is generally orders of magnitude faster than ATLAS-like library generators, its effectiveness can be limited by the accuracy of the performance models used. In this paper, we describe an approach in which a class of computations is modeled in terms of empirically measured constituent operations, allowing the overall execution time to be modeled. The performance model with empirically determined cost components is used to select library calls and choose data layout transformations in the context of the Tensor Contraction Engine, a compiler for a high-level domain-specific language for expressing computational models in quantum chemistry. The effectiveness of the approach is demonstrated through experimental measurements on representative computations from quantum chemistry.
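The idea of a model with empirically measured cost components can be sketched as follows. All names and cost numbers are illustrative stand-ins for values an installer would measure on the target machine; this is not the ATLAS or Tensor Contraction Engine model.

```python
# Sketch: choose a tile size from a performance model whose per-operation
# cost components are empirically measured (the numbers below are made up).

def predicted_time(n, tile, cost):
    """Predict tiled matrix-multiply time: flop cost plus per-tile overhead,
    with a cache-miss penalty when three tile blocks no longer fit in cache."""
    tiles_per_dim = -(-n // tile)                     # ceil(n / tile)
    flops = 2 * n ** 3
    tile_overhead = cost["tile_setup"] * tiles_per_dim ** 3
    fits = tile * tile * 3 * 8 <= cost["cache_bytes"]  # 3 double-precision blocks
    cache_penalty = 0.0 if fits else cost["miss"] * flops
    return flops * cost["flop"] + tile_overhead + cache_penalty

def best_tile(n, candidates, cost):
    """Select the candidate tile size the model predicts to be fastest."""
    return min(candidates, key=lambda t: predicted_time(n, t, cost))

# Hypothetical empirically measured cost components for one machine.
measured = {"flop": 1e-9, "tile_setup": 5e-7, "miss": 2e-9, "cache_bytes": 32 * 1024}
print(best_tile(512, [16, 32, 64, 128], measured))
```

Unlike a purely empirical search, no candidate version is ever executed; only the small constituent costs are measured once.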
4.
《Microprocessors and Microsystems》2005,29(2-3):51-62
The DEFACTO compilation and synthesis system is capable of automatically mapping computations expressed in high-level imperative programming languages such as C to FPGA-based systems. DEFACTO combines parallelizing compiler technology with behavioral VHDL synthesis tools to guide the application of high-level compiler transformations in the search for high-quality hardware designs. In this article we illustrate the effectiveness of this approach in automatically mapping several kernel codes to an FPGA quickly and correctly. We also present a detailed example comparing the performance of an automatically generated design against a manually generated implementation of the same computation. The design-space-exploration component of DEFACTO is able to explore a number of designs for a particular computation that would otherwise be impractical for a human designer.
5.
In recent years, the GPU (graphics processing unit) has evolved into an extremely powerful and flexible processor, and it now represents an attractive platform for general-purpose computation. Moreover, changes to the design and programmability of GPUs provide the opportunity to perform general-purpose computation on a GPU (GPGPU). Even though many programming languages, software tools, and libraries have been proposed to facilitate GPGPU programming, the unusual and specific programming model of the GPU remains a significant barrier to writing GPGPU programs. In this paper, we introduce a novel compiler-based approach to GPGPU programming. Compiler directives are used to label code fragments that are to be executed on the GPU. Our GPGPU compiler, Guru, converts the labeled code fragments into ISO-compliant C code that contains appropriate OpenGL and Cg API calls. A native C compiler can then be used to compile this code into an executable for the GPU. Our compiler is implemented on top of the Open64 compiler infrastructure. Preliminary experimental results from selected benchmarks show that our compiler produces significant performance improvements for programs that exhibit a high degree of data parallelism.
6.
The production system is a theoretical model of computation relevant to the artificial intelligence field, allowing for problem-solving procedures such as hierarchical tree search. In this work we explore some of the connections between artificial intelligence and quantum computation by presenting a model for a quantum production system. Our approach focuses on initially developing a model for a reversible production system, which is a simple mapping of Bennett's reversible Turing machine. We then expand on this result in order to accommodate the requirements of quantum computation. We present the details of how our proposition can be used alongside Grover's algorithm in order to yield a speedup relative to its classical counterpart. We discuss the requirements associated with such a speedup and how it compares against a similar quantum hierarchical search approach.
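The speedup invoked here is Grover's quadratic one. Schematically, for a search tree of branching factor $b$ and depth $d$ the leaf space has size $N = b^d$, so (constants omitted):

```latex
T_{\text{classical}} = O\!\left(b^{d}\right), \qquad
T_{\text{Grover}} = O\!\left(\sqrt{b^{d}}\right) = O\!\left(b^{d/2}\right)
```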
7.
《The Journal of Logic and Algebraic Programming》2012,81(3):181-208
We derive a Floyd–Hoare logic for non-local jumps and mutable higher-order procedural variables from a formulæ-as-types notion of control for classical logic. A key contribution of this work is the design of an imperative dependent type system for Hoare triples, which corresponds to classical logic, but where the famous consequence rule is admissible. Moreover, we prove that this system is complete for a reasonable notion of validity for Hoare judgments.
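The consequence rule referred to as admissible here is the standard one of Hoare logic, which strengthens the precondition and weakens the postcondition:

```latex
\frac{P \Rightarrow P' \qquad \{P'\}\; C\; \{Q'\} \qquad Q' \Rightarrow Q}
     {\{P\}\; C\; \{Q\}}
```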
8.
Shao Zhiqing. Journal of Computer Science and Technology, 1992, 7(4): 363-368
In this paper we generalize the while-rule in Hoare calculus to an infinitary one and then present a sufficient condition, much weaker than expressiveness, for Cook's relative completeness theorem with respect to our new axiomatic system. Using the extended Hoare calculus we can derive true Hoare formulas that contain while statements free of loop invariants. It is also pointed out that the weak condition is a first-order property and therefore provides a possible approach to the characterization of relative completeness, which is also a first-order property.
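For contrast, the standard while-rule requires a single loop invariant $p$:

```latex
\frac{\{p \wedge b\}\; S\; \{p\}}
     {\{p\}\ \mathbf{while}\ b\ \mathbf{do}\ S\ \{p \wedge \neg b\}}
```

An infinitary generalization in the spirit described (one plausible form, not necessarily the paper's exact rule) replaces the invariant premise with a countable family of triples, so no single invariant need be found:

```latex
\frac{\{p_i \wedge b\}\; S\; \{p_{i+1}\} \quad (i = 0, 1, 2, \ldots)}
     {\{p_0\}\ \mathbf{while}\ b\ \mathbf{do}\ S\ \{(\exists i\,.\; p_i) \wedge \neg b\}}
```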
9.
The programming language Pascal   (cited 2 times in total: 0 self-citations, 2 by others)
Prof. Dr. N. Wirth. Acta Informatica, 1971, 1(1): 35-63
Summary. A programming language called Pascal is described which was developed on the basis of Algol 60. Compared to Algol 60, its range of applicability is considerably increased due to a variety of data structuring facilities. In view of its intended usage both as a convenient basis to teach programming and as an efficient tool to write large programs, emphasis was placed on keeping the number of fundamental concepts reasonably small, on a simple and systematic language structure, and on efficient implementability. A one-pass compiler has been constructed for the CDC 6000 computer family; it is expressed entirely in terms of Pascal itself.
Fachgruppe Computer-Wissenschaften, Eidg. Technische Hochschule, Zürich, Schweiz. The author gratefully acknowledges his indebtedness to C. A. R. Hoare for his many valuable suggestions concerning overall design strategy as well as details, and for his critical scrutiny of this paper.
10.
This paper addresses the issue of compiler correctness. The approach taken is to systematically construct a correct compiler for a language from a formal semantic definition of the language. For this purpose, an operational semantics of a language is chosen as the basis for the approach. That is, the compiler for a language is derived from an interpreter of the language. The derivation process uses the notion of mixed computation proposed by Ershov. Briefly stated, one begins interpreting and when a primitive state changing instruction is about to be executed, the instruction is emitted as code instead. The correctness of all compilers produced by the method is guaranteed by proving the derivation rules correct. This proof is a one-time task for each specification language. The specification language studied in this paper is the Vienna Definition Language (VDL). The object code generated by the compiler is in an intermediate language close to an assembly language. Therefore, the translation from the intermediate language into the assembly language should be straightforward.
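The "emit instead of execute" idea behind mixed computation can be sketched for a toy language. Control flow is interpreted at compile time, while every primitive state-changing instruction is emitted as target code instead of being executed. The toy language and instruction set are illustrative, not VDL.

```python
# Sketch of deriving a compiler from an interpreter via mixed computation:
# loops are interpreted (unrolled), primitives are emitted as code.

def compile_by_mixed_computation(program):
    """program: list of ('set', var, const) | ('add', var, const)
    | ('repeat', count, subprogram) statements."""
    code = []

    def run(stmts):
        for stmt in stmts:
            if stmt[0] == "repeat":          # interpreted at compile time
                _, count, body = stmt
                for _ in range(count):
                    run(body)
            else:                            # primitive: emit, don't execute
                op, var, const = stmt
                code.append(f"{op.upper()} {var}, {const}")

    run(program)
    return code

prog = [("set", "x", 0), ("repeat", 3, [("add", "x", 1)])]
print(compile_by_mixed_computation(prog))
```

Running the interpreter in this emitting mode on a program yields a straight-line instruction sequence, which is exactly the compiled code.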
11.
We have implemented a compiler for key parts of Modelica, an object-oriented language supporting equation-based modeling and simulation of complex physical systems. The compiler is extensible, to support experiments with emerging tools for physical models. To achieve extensibility, the implementation is done declaratively in JastAdd, a metacompilation system supporting modern attribute grammar mechanisms such as reference attributes and nonterminal attributes. This paper reports on experiences from this implementation. For name and type analyses, we illustrate how declarative design strategies, originally developed for a Java compiler, could be reused to support Modelica's advanced features of multiple inheritance and structural subtyping. Furthermore, we present new general design strategies for declarative generation of target ASTs from source ASTs. We illustrate how these strategies are used to resolve a generics-like feature of Modelica called modifications, and to support flattening, a fundamental part of Modelica compilation. To validate that the approach is practical, we have compared the execution speed of our compiler to two existing Modelica compilers.
12.
13.
《Journal of Parallel and Distributed Computing》1994,21(1):15-26
This paper describes the design of the Fortran 90D/HPF compiler, a source-to-source parallel compiler for distributed memory systems being developed at Syracuse University. Fortran 90D/HPF is a data parallel language with special directives to specify data alignment and distributions. A systematic methodology to process distribution directives of Fortran 90D/HPF is presented. Furthermore, techniques for data and computation partitioning, communication detection and generation, and the run-time support for the compiler are discussed. Finally, initial performance results for the compiler are presented. We believe that the methodology to process data distribution, computation partitioning, communication system design, and the overall compiler design can be used by the implementors of compilers for HPF.
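The ownership computation behind an HPF-style BLOCK distribution can be sketched as follows: each of P processors owns a contiguous block of ceil(N/P) array elements, and the compiler uses the same mapping both to partition computation ("owner computes") and to detect where communication is needed. The function names are illustrative, not from the Fortran 90D/HPF compiler itself.

```python
# Sketch: ownership and local-index mapping for a BLOCK-distributed array.

def block_owner(i, n, p):
    """Processor owning element i of an n-element array BLOCK-distributed
    over p processors (blocks of size ceil(n / p))."""
    block = -(-n // p)                       # ceil(n / p)
    return i // block

def local_index(i, n, p):
    """Index of global element i within its owner's local array."""
    return i % -(-n // p)

# Example: 10 elements over 3 processors -> blocks of size 4, 4, 2.
print([block_owner(i, 10, 3) for i in range(10)])
```

A reference to element j in a loop owned by `block_owner(i, ...)` requires communication exactly when `block_owner(j, ...)` differs, which is how communication detection reduces to comparing these mappings.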
14.
We study issues in verifying compilers for modern imperative and object-oriented languages. We take the view that it is not the compiler but the code generated by it which must be correct. It is this subtle difference that allows for reusing standard compiler architecture, construction methods, and tools in a verifying compiler.

Program checking is the main technique for avoiding the cumbersome task of verifying most parts of a compiler and the tools by which they are generated. Program checking remaps the result of a compiler phase to its origin, the input of this phase, in a provably correct manner. We then only have to compare the actual input to its regenerated form, a basically syntactic process. The correctness proof of the generation of the result is replaced by the correctness proof of the remapping process. The latter turns out to be far easier than proving the generating process correct.

The only part of a compiler where program checking does not seem to work is the transformation step which replaces source language constructs and their semantics, given, e.g., by an attributed syntax tree, by an intermediate representation, e.g., in SSA form, which expresses the same program but in terms of the target machine. This transformation phase must be proven directly using Hoare logic and/or theorem provers. However, we can show that, given the features of today's programming languages and hardware architectures, this transformation is to a large extent universal: it can be reused for any pair of source and target languages. To achieve this goal we investigate annotating the syntax tree as well as the intermediate representation with constraints exhibiting specific properties of the source language. Such annotations are necessary during code optimization anyway.
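The program-checking idea can be sketched on a toy phase: an untrusted translator turns an expression AST into postfix (RPN) code, and a small trusted checker remaps the RPN back into an AST and compares it syntactically with the phase's input. Only the remapping function needs a correctness proof. All names are illustrative.

```python
# Sketch of program checking for one compiler phase.

def to_rpn(ast):
    """Untrusted phase: ('+', l, r) / ('*', l, r) trees -> postfix token list."""
    if isinstance(ast, tuple):
        op, l, r = ast
        return to_rpn(l) + to_rpn(r) + [op]
    return [ast]

def rpn_to_ast(tokens):
    """Trusted remapping: rebuild the AST from the emitted postfix code."""
    stack = []
    for t in tokens:
        if t in ("+", "*"):
            r, l = stack.pop(), stack.pop()
            stack.append((t, l, r))
        else:
            stack.append(t)
    (result,) = stack                        # exactly one tree must remain
    return result

def checked_translate(ast):
    """Run the phase, then accept its output only if it remaps to the input."""
    code = to_rpn(ast)
    assert rpn_to_ast(code) == ast, "phase output rejected by checker"
    return code

print(checked_translate(("+", "a", ("*", "b", "c"))))
```

If a buggy or malicious `to_rpn` ever emits wrong code, the run is rejected; correctness of each accepted run follows from the remapping proof alone.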
15.
Ting Gao, Fengli Yan, Zhixi Wang, Youcheng Li. Frontiers of Computer Science in China, 2008, 2(2): 179-189
In this article we review the usefulness of probabilistic cloning and present examples of quantum computation tasks for which quantum cloning offers an advantage that cannot be matched by any approach that does not resort to it. In these quantum computations, one needs to distribute quantum information contained in states about which we have some partial information. To perform the quantum computations, one uses a state-dependent probabilistic quantum cloning procedure to distribute quantum information in the middle of a quantum computation. We also discuss the achievable efficiencies and the efficient quantum logic network for probabilistically cloning the quantum states used in implementing quantum computation tasks for which cloning provides an enhancement in performance.
16.
Jonathan S. Ostroff, Faraz Ahmadi Torshizi, Hai Feng Huang, Bernd Schoeller. Formal Aspects of Computing, 2009, 21(4): 319-346
SCOOP is a concurrent programming language with a new semantics for contracts that applies equally well in concurrent and sequential contexts. SCOOP eliminates race conditions and atomicity violations by construction. However, it is still vulnerable to deadlocks. In this paper we describe how far contracts can take us in verifying interesting properties of concurrent systems using modular Hoare rules, and we show how theorem-proving methods developed for sequential Eiffel can be extended to the concurrent case. However, some safety and liveness properties depend upon the environment and cannot be proved using the Hoare rules. To deal with such system properties, we outline a SCOOP Virtual Machine (SVM) as a fair transition system. The SVM makes it feasible to use model-checking and theorem-proving methods for checking global temporal logic properties of SCOOP programs. The SVM uses the Hoare rules where applicable to reduce the number of steps in a computation.
P. J. Brooke, R. F. Paige and Dong Jin Song
This work was conducted under an NSERC Discovery grant.
17.
Fortran 95 is often used for "number crunching": scientific and engineering applications where performance is important and which operate on large datasets. The language allows the implementation of certain elements of object-oriented design, which facilitate code expansion, reuse, and maintenance. In this paper we discuss two series of tests that measure how different object-oriented design elements of Fortran 95 affect overall performance. The first series of tests consists of several implementations of the multiplication of two matrices. These tests focus exclusively on computation time, not considering other parts of the object life cycle such as construction and destruction. The second series consists of computing a finite element matrix for a diffusion term. A more complex environment with different classes is studied. Here, we consider not only the time spent doing useful computations but the whole object life cycle. Results show that object-oriented design comes at a cost in all cases. However, given the right compiler, using the right compiler optimization techniques, and keeping the number of objects and method calls low, Fortran 95 designs can benefit from object-oriented design techniques with a running-time increase of less than 10%.
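The shape of such a comparison, transposed to Python for illustration: the same matrix product is computed through an object wrapper with a per-element accessor call versus plain nested loops over raw lists. Both produce identical results, while the object-oriented version pays for the extra indirection on every element access. The names are illustrative, not the paper's Fortran test harness.

```python
# Sketch: object-oriented vs. plain-procedure matrix multiply.

class Matrix:
    def __init__(self, rows):
        self.rows = rows

    def get(self, i, j):                     # accessor call = per-element overhead
        return self.rows[i][j]

    def multiply(self, other):
        n = len(self.rows)
        return Matrix([[sum(self.get(i, k) * other.get(k, j) for k in range(n))
                        for j in range(n)] for i in range(n)])

def multiply_plain(a, b):
    """Same product with direct indexing and no method calls."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
print(Matrix(a).multiply(Matrix(b)).rows)
print(multiply_plain(a, b))
```

Timing both versions on large inputs (e.g. with `timeit`) exposes the accessor-call cost that the paper's tests quantify for Fortran 95.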
18.
《Science of Computer Programming》1987,9(2):107-136
Versions of Hoare logic have been introduced to prove partial and total correctness properties of programs. In this paper it is shown how a Hoare-like proof system for while programs may be extended to prove properties of the computation time as well. It should be stressed that the system does not require the programs to be modified by inserting explicit operations upon a clock variable. We generalize the notions of arithmetically sound and complete and show that the proof system satisfies these. Also we derive formal rules corresponding to the informal rules for determining the computation time of while programs. The applicability of the proof system is illustrated by an example, the bubble sorting algorithm.
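A schematic reading of such timed triples (illustrative notation, not the paper's system): $\{p\}\; S\; \{q \Downarrow e\}$ means that from any state satisfying $p$, $S$ terminates in a state satisfying $q$ within at most $e$ time units, with no clock variable inserted into the program. Sequencing then adds the bounds:

```latex
\frac{\{p\}\; S_1\; \{r \Downarrow e_1\} \qquad \{r\}\; S_2\; \{q \Downarrow e_2\}}
     {\{p\}\; S_1 ; S_2\; \{q \Downarrow e_1 + e_2\}}
```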
19.
20.
Gerald Gilbert, Michael Hamrick, Yaakov S. Weinstein. Quantum Information Processing, 2008, 7(6): 263-274
In this review we survey both standard fault-tolerance theory and Kitaev's model for quantum computation, and demonstrate how they can be combined to yield quantitative results that reveal the interplay between the two. This analysis establishes a methodology allowing one to quantitatively determine design parameters for quantum computers, values that ensure an overall computation yields a correct final result with some prescribed probability of success, as opposed to merely ensuring that the desired final quantum state is obtained. As an example, we explicitly calculate the number of levels of error-correction concatenation needed to achieve a correct final result with some prescribed success probability.
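The concatenation count mentioned can be estimated from the standard threshold-theorem bound (a textbook estimate, not necessarily the paper's exact expression). With physical error rate $p$ below the threshold $p_{\mathrm{th}}$, $L$ levels of concatenation give an effective logical error rate

```latex
p_L \approx p_{\mathrm{th}} \left( \frac{p}{p_{\mathrm{th}}} \right)^{2^{L}}
```

so achieving overall success probability at least $1 - \varepsilon$ over $K$ logical operations requires $p_L \le \varepsilon / K$, i.e.

```latex
2^{L} \;\ge\; \frac{\log\!\big(\varepsilon / (K\, p_{\mathrm{th}})\big)}
                   {\log\!\big(p / p_{\mathrm{th}}\big)}
```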