Similar Documents
20 similar documents found (search time: 608 ms).
1.
We introduce the idea of optimisation validation, which is to formally establish that an instance of an optimising transformation indeed improves with respect to some resource measure. This is related to, but in contrast with, translation validation, which aims to establish that a particular instance of a transformation undertaken by an optimising compiler is semantics preserving. Our main setting is a program logic for a subset of Java bytecode, which is sound and complete for a resource-annotated operational semantics. The latter employs resource algebras for measuring dynamic costs such as time and space, as well as more elaborate measures. We describe examples of optimisation validation that we have formally verified in Isabelle/HOL using the logic. We also introduce a type and effect system for measuring static costs such as code size, which is proved consistent with the operational semantics.
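As a rough illustration of the resource-algebra idea in this abstract, here is a minimal Python sketch; the instruction set, cost table, and `Resource` class are invented for illustration, not taken from the paper's Isabelle/HOL development. Each step of a toy stack machine is charged an element of a monoid that tracks time and peak stack depth.

```python
# Hypothetical instruction set and costs: a resource-annotated
# interpreter where every step also accumulates a monoid element.

class Resource:
    """Resource algebra: a monoid tracking time and peak stack depth."""
    def __init__(self, time=0, stack=0):
        self.time, self.stack = time, stack
    def __add__(self, other):  # monoid operation: add times, max depths
        return Resource(self.time + other.time, max(self.stack, other.stack))
    def __repr__(self):
        return f"Resource(time={self.time}, stack={self.stack})"

COST = {"push": Resource(1, 1), "add": Resource(1, 0), "pop": Resource(1, 0)}

def run(program):
    stack, used = [], Resource()
    for op, *args in program:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            stack.append(stack.pop() + stack.pop())
        elif op == "pop":
            stack.pop()
        # charge the step's cost and fold in the current stack depth
        used = used + COST[op] + Resource(0, len(stack))
    return stack, used

print(run([("push", 2), ("push", 3), ("add",)]))
# ([5], Resource(time=3, stack=2))
```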

2.
As safety-critical systems place ever higher demands on computing performance, multi-core processors, which deliver greater computing power while reducing the size, weight, and power consumption of electronic equipment, will see wide use in safety-critical domains. Synchronous languages can express deterministic concurrency and have precise timing semantics, making them well suited to modelling and verifying safety-critical software. The existing SIGNAL compiler mainly supports sequential code generation and pays little attention to multi-threaded code generation. This paper presents a multi-threaded code generation tool for the synchronous language SIGNAL. The SIGNAL program is first translated, via clock calculus, into an S-CGA intermediate program; the S-CGA program is then converted into a clock data-dependence graph for dependence analysis; the graph is partitioned by topological sorting, and an optimisation algorithm and a pipeline-based task-partitioning method are applied to the result; finally, the partitions are mapped onto a virtual multi-threaded structure from which executable multi-threaded C/Java code is generated. Experiments on a multi-core processor validate the effectiveness of the approach.
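A minimal sketch of the partitioning step described above, assuming an invented node set: a clock data-dependence graph is topologically sorted into levels, and nodes within one level share no dependence, so each level can be emitted as the parallel tasks of one pipeline stage.

```python
# Node names and dependences are invented for illustration.
from collections import defaultdict, deque

deps = {  # deps[n] lists the nodes n depends on
    "read_x": [], "read_y": [],
    "f": ["read_x"], "g": ["read_y"],
    "merge": ["f", "g"], "write_z": ["merge"],
}

def levelize(deps):
    indeg = {n: len(ps) for n, ps in deps.items()}
    succs = defaultdict(list)
    for n, ps in deps.items():
        for p in ps:
            succs[p].append(n)
    frontier = deque(n for n, d in indeg.items() if d == 0)
    levels = []
    while frontier:
        level, nxt = list(frontier), deque()
        for n in level:
            for s in succs[n]:
                indeg[s] -= 1
                if indeg[s] == 0:
                    nxt.append(s)
        levels.append(level)
        frontier = nxt
    return levels  # each level = one set of concurrently schedulable tasks

for i, lvl in enumerate(levelize(deps)):
    print(f"stage {i}: {lvl}")
```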

3.
The translation validation approach involves establishing semantics preservation of individual compilations. In this paper, we present a novel framework for translation validation of optimizers. We identify a comprehensive set of primitive program transformations that are commonly used in many optimizations. For each primitive, we define soundness conditions that guarantee that the transformation is semantics preserving. This framework of transformations and soundness conditions is independent of any particular compiler implementation and is formalized in PVS. An optimizer is instrumented to generate the trace of an optimization run in terms of the predefined transformation primitives. The validation succeeds if (1) the trace conforms to the optimization and (2) the soundness conditions of the individual transformations in the trace are satisfied. The first step eliminates the need to trust the instrumentation. The soundness conditions are defined in a temporal logic and therefore the second step involves model checking. Thus the scheme is completely automatable. We have applied this approach to several intraprocedural optimizations of RTL intermediate code in GNU Compiler Collection (GCC) v4.1.0, namely, loop invariant code motion, partial redundancy elimination, lazy code motion, code hoisting, and copy and constant propagation for sample programs written in a subset of the C language. The validation does not require information about program analyses performed by GCC. Therefore even though the GCC code base is quite large and complex, instrumentation could be achieved easily. The framework requires an estimated 21 lines of instrumentation code and 140 lines of PVS specifications for every 1000 lines of the GCC code considered for validation.
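A minimal sketch of the scheme, with a simplified stand-in primitive (constant folding) and soundness condition rather than the paper's PVS and temporal-logic machinery: the trace of primitives is replayed, and each step's soundness condition is checked before the next step is applied.

```python
# Toy programs are lists of (destination, expression) pairs; the
# transformation primitive and its soundness condition are stand-ins.

def _is_const(expr):
    return all(c.isdigit() or c in "+-* ()" for c in expr)

def constant_fold(stmts):
    return [(dst, eval(e, {}, {}) if _is_const(e) else e) for dst, e in stmts]

def sound_constant_fold(before, after):
    # Soundness condition: only constant right-hand sides may change,
    # and each must evaluate to exactly the new value.
    return all(b == a or (_is_const(b[1]) and eval(b[1], {}, {}) == a[1])
               for b, a in zip(before, after))

# Stand-in for the trace emitted by the instrumented optimizer.
TRACE = [(constant_fold, sound_constant_fold)]

prog = [("x", "2 + 3"), ("y", "x * 4")]
for transform, sound in TRACE:
    new = transform(prog)
    assert sound(prog, new), "translation validation failed"
    prog = new
print(prog)  # [('x', 5), ('y', 'x * 4')]
```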

4.
Translation validation is an approach for validating the output of optimizing compilers. Rather than verifying the compiler itself, translation validation mandates that every run of the compiler generate a formal proof that the produced target code is a correct implementation of the source code. Speculative loop optimizations are aggressive optimizations that are only correct under conditions which cannot be validated at compile time. We propose using an automatic theorem prover together with the translation validation framework to automatically generate run-time tests for such speculative optimizations. This run-time validation approach must not only detect the conditions under which an optimization generates incorrect code, but also provide a way to recover from the optimization without aborting the program or producing an incorrect result. In this paper, we apply the run-time validation technique to a class of speculative reordering transformations and give some initial results of run-time tests generated by the theorem prover CVC.
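A minimal sketch of the run-time validation pattern, using an invented guard and a speculative invariant-load hoist in place of the paper's reordering transformations and CVC-generated tests: the guard selects the optimized loop when the speculation holds and falls back to the original loop otherwise, so execution never aborts.

```python
def scale(dst, src, factor_cell):
    # The speculative version hoists the factor load out of the loop,
    # which is only correct if dst never aliases factor_cell.
    if dst is not factor_cell:           # run-time test for the speculation
        f = factor_cell[0]               # hoisted invariant load
        for i in range(len(src)):
            dst[i] = src[i] * f
    else:                                # recovery: original, unoptimized loop
        for i in range(len(src)):
            dst[i] = src[i] * factor_cell[0]

buf = [2, 0, 0]
scale(buf, [3, 4, 5], buf)  # aliasing case exercises the recovery path
print(buf)                  # [6, 24, 30]; hoisting f=2 would give [6, 8, 10]
```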

5.
In this paper we describe an algebraic approach to construct provably correct compilers for object-oriented languages; this is illustrated for programs written in a language similar to a sequential subset of Java. It includes recursive classes, inheritance, dynamic binding, recursion, type casts and tests, assignment, and class-based visibility, but with copy semantics. In our approach, we tackle the problem of compiler correctness by reducing the task of compilation to that of program refinement. Compilation is identified with the reduction of a source program to a normal form that models the execution of object code. The normal form is generated by a series of correctness-preserving transformations that are proved sound from the basic laws of the language; therefore it is correct by construction. The main advantages of our approach are the characterisation of compilation within a uniform framework, where comparisons and translations between semantics are avoided, and the modularity and extensibility of the resulting compiler.

6.
A Vectorizing Compiler for Multimedia Extensions
In this paper, we present an implementation of a vectorizing C compiler for Intel's MMX (Multimedia Extension). The compiler identifies data-parallel sections of the code using scalar and array dependence analysis. To enhance the scope for application of the subword semantics, our compiler performs several code transformations, including strip mining, scalar expansion, grouping and reduction, and distribution. Thereafter, inline assembly instructions corresponding to the data-parallel sections are generated. We have used the Stanford University Intermediate Format (SUIF), a public domain compiler tool, for our implementation. We evaluated the performance of the code generated by our compiler for a number of benchmarks. Initial performance results reveal that our compiler-generated code produces a reasonable performance improvement (speedup of 2 to 6.5) over the code generated without the vectorizing transformations/inline assembly. In certain cases, the performance of the compiler-generated code is within 85% of that of hand-tuned code for the MMX architecture.
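A minimal sketch of strip mining, one of the transformations listed above; the lane width and the saturating-add kernel are assumptions for illustration, and plain Python stands in for the generated MMX inline assembly.

```python
WIDTH = 8  # lanes per subword register (assumption for illustration)

def saturating_add(a, b):
    n = len(a)
    out = [0] * n
    for s in range(0, n, WIDTH):               # outer strip loop
        for i in range(s, min(s + WIDTH, n)):  # inner loop -> one SIMD op
            out[i] = min(a[i] + b[i], 255)     # 8-bit saturating add
    return out

print(saturating_add([250, 1, 2], [10, 1, 2]))  # [255, 2, 4]
```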

7.
To minimise the errors introduced by the compiler itself when compiling a synchronous dataflow language, formal methods are needed to generate code automatically and to guarantee that the code produced by the compiler can be used in nuclear instrumentation and control (I&C) systems. Using the theorem prover Coq, this work formally defines the syntax, semantics, and translation algorithm of the translation phase that maps the main-node input structure of the synchronous dataflow language Lustre to Clight, and completes a formal proof of the translation algorithm. The results show that such a formally verified compiler can generate trusted object code whose behaviour is consistent with the source code, and that the generated code satisfies the execution requirements of nuclear I&C systems.
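A minimal sketch, in Python rather than Coq, of the correspondence such a verified translation must establish: the stream semantics of a toy dataflow node (invented for illustration) agrees, cycle by cycle, with the generated imperative step function over an explicit state.

```python
def node_stream(xs):
    """Stream semantics of a toy node: y = x + pre(y), with pre(y) = 0."""
    acc, ys = 0, []
    for x in xs:
        acc = x + acc
        ys.append(acc)
    return ys

class NodeState:            # generated imperative code: state struct + step
    def __init__(self):
        self.pre_y = 0
    def step(self, x):
        y = x + self.pre_y
        self.pre_y = y
        return y

xs = [1, 2, 3, 4]
st = NodeState()
assert node_stream(xs) == [st.step(x) for x in xs]  # behaviours agree
print(node_stream(xs))  # [1, 3, 6, 10]
```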

8.
When developing safety-critical software, it is the correctness of the object code that is paramount. However, it is desirable to perform formal verification on the source program. To ensure that correctness results proved about the source program do apply to the object code, the compiler used can be formally verified. However, care must be taken to ensure that the compiler correctness theorem proved is suitable. We have combined a derived programming logic with a verified compiler for a generic subset of the Vista structured assembly language. We show how correctness properties of object code can be formally derived from corresponding correctness properties of the source program which have been proved using the programming logic. Thus we can be sure the results do apply to the object code. The work described has been performed using the HOL system and so is machine-checked.

9.
The paper presents approaches to the validation of optimizing compilers. The emphasis is on aggressive and architecture-targeted optimizations which try to obtain the highest performance from modern architectures, in particular EPIC-like microprocessors. Rather than verify the compiler, the approach of translation validation performs a validation check after every run of the compiler, producing a formal proof that the produced target code is a correct implementation of the source code. First we survey the standard approach to validation of optimizations which preserve the loop structure of the code (though they may move code in and out of loops and radically modify individual statements), present a simulation-based general technique for validating such optimizations, and describe a tool, VOC-64, which implements this technique. For more aggressive optimizations which, typically, alter the loop structure of the code, such as loop distribution and fusion, loop tiling, and loop interchanges, we present a set of permutation rules which establish that the transformed code satisfies all the implied data dependencies necessary for the validity of the considered transformation. We describe the necessary extensions to VOC-64 in order to validate these structure-modifying optimizations. Finally, the paper discusses preliminary work on run-time validation of speculative loop optimizations, which involves using run-time tests to ensure the correctness of loop optimizations whose correctness neither the compiler nor compiler-validation techniques can guarantee. Unlike compiler validation, run-time validation has not only the task of determining when an optimization has generated incorrect code, but also the task of recovering from the optimization without aborting the program or producing an incorrect result. This technique has been applied to several loop optimizations, including loop interchange, loop tiling, and software pipelining, and appears to be quite promising.
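A minimal sketch of the simulation-based technique, with invented source and target programs and an invented abstraction mapping: both sides are executed and compared at cut points through a data abstraction that maps target observations back to source terms.

```python
def source(n):                 # source loop: running sum s
    s = 0
    for i in range(n):
        s += i
        yield {"i": i, "s": s}   # cut point: loop-head observation

def target(n):                 # target after strength reduction keeps d = 2*s
    d = 0
    for i in range(n):
        d += 2 * i
        yield {"i": i, "d": d}

def abstraction(t):            # relates a target observation to source terms
    return {"i": t["i"], "s": t["d"] // 2}

assert all(sv == abstraction(tv) for sv, tv in zip(source(5), target(5)))
print("simulation holds at all cut points")
```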

10.
We study issues in verifying compilers for modern imperative and object-oriented languages. We take the view that it is not the compiler but the code generated by it which must be correct. It is this subtle difference that allows for reusing standard compiler architecture, construction methods and tools also in a verifying compiler. Program checking is the main technique for avoiding the cumbersome task of verifying most parts of a compiler and the tools by which they are generated. Program checking remaps the result of a compiler phase to its origin, the input of this phase, in a provably correct manner. We then only have to compare the actual input to its regenerated form, a basically syntactic process. The correctness proof of the generation of the result is replaced by the correctness proof of the remapping process. The latter turns out to be far easier than proving the generating process correct. The only part of a compiler where program checking does not seem to work is the transformation step which replaces source language constructs and their semantics, given, e.g., by an attributed syntax tree, by an intermediate representation, e.g., in SSA-form, which expresses the same program but in terms of the target machine. This transformation phase must be directly proven using Hoare logic and/or theorem provers. However, we can show that, given the features of today's programming languages and hardware architectures, this transformation is to a large extent universal: it can be reused for any pair of source and target language. To achieve this goal we investigate annotating the syntax tree as well as the intermediate representation with constraints for exhibiting specific properties of the source language. Such annotations are necessary during code optimization anyway.
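A minimal sketch of program checking with an invented toy phase: the untrusted phase lowers expression trees to postfix, and the small trusted checker remaps the output back to a tree for a purely syntactic comparison with the input.

```python
def phase(expr):
    """Untrusted phase: translate ('add', a, b) trees to postfix lists."""
    if isinstance(expr, tuple):
        _, a, b = expr
        return phase(a) + phase(b) + ["add"]
    return [expr]

def remap(postfix):
    """Trusted checker: rebuild the input tree from the phase output."""
    stack = []
    for tok in postfix:
        if tok == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(("add", a, b))
        else:
            stack.append(tok)
    (tree,) = stack
    return tree

src = ("add", ("add", "x", "y"), "z")
out = phase(src)
assert remap(out) == src      # purely syntactic comparison
print(out)  # ['x', 'y', 'add', 'z', 'add']
```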

11.
This paper presents new approaches to the validation of loop optimizations that compilers use to obtain the highest performance from modern architectures. Rather than verify the compiler, the approach of translation validation performs a validation check after every run of the compiler, producing a formal proof that the produced target code is a correct implementation of the source code. As part of an active and ongoing research project on translation validation, we have previously described approaches for validating optimizations that preserve the loop structure of the code and have presented a simulation-based general technique for validating such optimizations. In this paper, for more aggressive optimizations that alter the loop structure of the code, such as distribution, fusion, tiling, and interchange, we present a set of permutation rules which establish that the transformed code satisfies all the implied data dependencies necessary for the validity of the considered transformation. We describe the extensions to our tool voc-64 which are required to validate these structure-modifying optimizations. This paper also discusses preliminary work on run-time validation of speculative loop optimizations. This involves using run-time tests to ensure the correctness of loop optimizations whose correctness cannot be guaranteed at compile time. Unlike compiler validation, run-time validation must not only determine when an optimization has generated incorrect code, but also recover from the optimization without aborting the program or producing an incorrect result. This technique has been applied to several loop optimizations, including loop interchange and loop tiling, and appears to be quite promising. This research was supported in part by NSF grant CCR-0098299, ONR grant N00014-99-1-0131, and the John von Neumann Minerva Center for Verification of Reactive Systems.
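A minimal sketch of a permutation rule for loop interchange (the dependence vectors are invented): the interchange is accepted only if every dependence distance vector, permuted the same way as the loops, stays lexicographically non-negative.

```python
def lex_nonneg(v):
    for x in v:
        if x > 0:
            return True
        if x < 0:
            return False
    return True          # all-zero vector: dependence within one iteration

def interchange_valid(dist_vectors, perm):
    return all(lex_nonneg([v[p] for p in perm]) for v in dist_vectors)

deps = [(1, -1)]                        # e.g. writes a[i+1][j-1], reads a[i][j]
print(interchange_valid(deps, (0, 1)))  # True : original order is legal
print(interchange_valid(deps, (1, 0)))  # False: interchange breaks the dep
```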

12.
13.
Speculative optimization, an advanced modern compilation technique, effectively increases instruction-level parallelism. In reverse engineering, however, code sometimes must be ported across platforms while speculative optimization is tied to the hardware platform, and sometimes the code structure must be reorganised so that the program logic is easier to understand; both cases require removing this hardware-dependent optimization. Targeting the IA-64 platform, this paper proposes a de-speculation algorithm that processes executable binary code produced by the ICC compiler, removes the speculative optimizations in the code, and converts it into an equivalent instruction sequence without speculation. The de-speculated code is easier to understand, and reverse engineering is freed from the hardware constraints. Tests show that the technique can effectively process code compiled by ICC.
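A minimal sketch of the de-speculation rewrite on grossly simplified IA-64-like instruction strings (not real ICC output): a hoisted speculative load and its later check are replaced by one ordinary load at the check site, yielding an equivalent non-speculative sequence.

```python
def despeculate(insns):
    out, pending = [], {}                  # reg -> speculative load operand
    for ins in insns:
        op, *args = ins.split()
        if op == "ld8.s":                  # remember the hoisted load, drop it
            reg, addr = args[0].rstrip(","), args[2]
            pending[reg] = addr
        elif op == "chk.s" and args[0].rstrip(",") in pending:
            reg = args[0].rstrip(",")
            out.append(f"ld8 {reg} = {pending.pop(reg)}")  # plain load here
        else:
            out.append(ins)
    return out

code = ["ld8.s r1 = [r2]", "add r3 = r4, r5",
        "chk.s r1, recover", "add r6 = r1, r3"]
print("\n".join(despeculate(code)))
# add r3 = r4, r5 / ld8 r1 = [r2] / add r6 = r1, r3
```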

14.
Modern compilers are responsible for translating the idealistic operational semantics of the source program into a form that makes efficient use of a highly complex heterogeneous machine. Since optimization problems are associated with huge and unstructured search spaces, this combinatorial task is poorly achieved in general, resulting in weak scalability and disappointing sustained performance. We address this challenge by working on the program representation itself, using a semi-automatic optimization approach to demonstrate that current compilers often suffer from unnecessary constraints and intricacies that can be avoided in a semantically richer transformation framework. Technically, the purpose of this paper is threefold: (1) to show that syntactic code representations close to the operational semantics lead to rigid phase ordering and cumbersome expression of architecture-aware loop transformations, (2) to illustrate how complex transformation sequences may be needed to achieve significant performance benefits, (3) to facilitate the automatic search for program transformation sequences, improving on classical polyhedral representations to better support operation research strategies in a simpler, structured search space. The proposed framework relies on a unified polyhedral representation of loops and statements, using normalization rules to allow flexible and expressive transformation sequencing. This representation makes it possible to extend the scalability of polyhedral dependence analysis, and to delay the (automatic) legality checks until the end of a transformation sequence. Our work leverages algorithmic advances in polyhedral code generation and has been implemented in a modern research compiler.
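A minimal sketch of the representation idea, using bare 2x2 integer matrices in place of full parametric affine schedules: each transformation in a sequence is a matrix composed onto the statement's schedule, so legality checking can be deferred to the end of the sequence rather than performed after every syntactic rewrite.

```python
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

IDENTITY    = [[1, 0], [0, 1]]
INTERCHANGE = [[0, 1], [1, 0]]            # swap loops i and j
SKEW        = [[1, 0], [1, 1]]            # j' = i + j

schedule = IDENTITY
for t in (INTERCHANGE, SKEW):             # the transformation sequence
    schedule = matmul(t, schedule)

print(schedule)  # composed schedule applied to (i, j) iteration vectors
```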

15.
Data refinement in a state-based language such as Z is defined using a relational model in terms of the behaviour of abstract programs. Downward and upward simulation conditions form a sound and jointly complete methodology to verify relational data refinements, which can be checked on an event-by-event basis rather than per trace. In models of concurrency, refinement is often defined in terms of sets of observations, which can include the events a system is prepared to accept or refuse, or depend on explicit properties of states and transitions. By embedding such concurrent semantics into a relational framework, eventwise verification methods for such refinement relations can be derived. In this paper, we continue our program of deriving simulation conditions for process algebraic refinement by defining further embeddings into our relational model: traces, completed traces, failure traces and extension. We then extend our framework to include various notions of automata based refinement.
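A minimal sketch of an event-by-event downward-simulation check over invented finite state spaces: applicability and correctness are verified for one abstract/concrete operation pair through an explicitly enumerated retrieve relation.

```python
R = {("empty", 0), ("full", 1), ("full", 2)}   # retrieve relation
A_OP = {("empty", "full")}                     # abstract 'fill' as a relation
C_OP = {(0, 1), (0, 2)}                        # concrete 'fill'

def downward_simulation():
    for (a, c) in R:
        a_enabled = any(x == a for x, _ in A_OP)
        c_enabled = any(x == c for x, _ in C_OP)
        if a_enabled and not c_enabled:
            return False                       # applicability fails
        for (x, c2) in C_OP:
            if x != c:
                continue
            # correctness: some abstract step matches via R
            if not any(y == a and (y2, c2) in R for y, y2 in A_OP):
                return False
    return True

print(downward_simulation())  # True
```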

16.
We present a comprehensive refinement calculus for the development of sequential, real-time programs from real-time specifications. A specification may include not only execution time limits, but also requirements on the behaviour of outputs over the duration of the execution of the program. The approach allows refinement steps that separate timing constraints and functional requirements. New rules are provided for handling timing constraints, but the refinement of components implementing functional requirements is essentially the same as in the standard refinement calculus. The product of the refinement process is a program in the target programming language extended with timing deadline directives. The extended language is a machine-independent, real-time programming language. To provide valid machine code for a particular model of machine, the machine code produced by a compiler must be analysed to guarantee that it meets the specified timing deadlines.

17.
We propose a framework based on an original generation and use of algorithmic skeletons, dedicated to the speculative parallelization of scientific nested loop kernels, able to apply polyhedral transformations to the target code at run time in order to exhibit parallelism and data locality. Parallel code generation is achieved at almost no cost by using binary algorithmic skeletons that are generated at compile time, and that embed the original code and operations devoted to instantiating a polyhedral parallelizing transformation and to verifying the speculations on dependences. The skeletons are patched at run time to generate the executable code. The run-time process includes a transformation selection guided by online profiling phases on short samples, using an instrumented version of the code. During this phase, the accessed memory addresses are used to compute dependence distance vectors on the fly, and are also interpolated to build a predictor of the forthcoming accesses. Interpolating functions and distance vectors are then employed for dependence analysis to select a parallelizing transformation that, if the prediction is correct, does not induce any rollback during execution. In order to ensure that the rollback time overhead stays low, the code is executed in successive slices of the outermost original loop of the nest. Each slice can be either a parallel version which instantiates a skeleton, a sequential original version, or an instrumented version. Moreover, such slicing of the execution provides the opportunity to transform the code differently in order to adapt to the observed execution phases, by patching one of the pre-built skeletons differently. The framework has been implemented with extensions of the LLVM compiler and an x86-64 runtime system. Significant speed-ups are shown on a set of benchmarks that could not have been handled efficiently by a compiler.
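A minimal sketch of the online profiling step, on a synthetic address trace: sampled addresses are interpolated into a linear predictor of forthcoming accesses, whose stride and exactness would then feed the dependence analysis that selects a transformation.

```python
def profile(addresses, iterations):
    # Linear fit through the first and last samples (simplified
    # interpolation; a real system fits per-access-site functions).
    stride = (addresses[-1] - addresses[0]) / (iterations[-1] - iterations[0])
    base = addresses[0] - stride * iterations[0]
    predict = lambda i: base + stride * i          # interpolating function
    residual = max(abs(predict(i) - a) for i, a in zip(iterations, addresses))
    return predict, stride, residual == 0          # exactly linear?

iters = [0, 1, 2, 3]
addrs = [0x1000, 0x1008, 0x1010, 0x1018]           # stride-8 accesses
predict, stride, linear = profile(addrs, iters)
print(stride, linear, hex(int(predict(10))))       # 8.0 True 0x1050
```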

18.
We present an approach to software model checking based on game semantics and the CSP process algebra. Open program fragments (i.e. terms-in-context) are compositionally modelled as CSP processes which represent their game semantics. This translation is performed by a prototype compiler. Observational equivalence and regular properties are checked by traces refinement using the FDR tool. We also present theorems for parameterised verification of polymorphic terms and properties. The effectiveness of the approach is evaluated on several examples. We acknowledge support by the EPSRC (GR/S52759/01). The second author was also supported by the Intel Corporation, and is also affiliated with the Mathematical Institute, Serbian Academy of Sciences and Arts, Belgrade.
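A minimal sketch, with tiny explicit-state processes instead of CSP/FDR, of the traces-refinement check the approach relies on: Impl refines Spec in the traces model iff every finite trace of Impl is also a trace of Spec.

```python
SPEC = {"s0": {"req": "s1"}, "s1": {"ack": "s0"}}            # req.ack cycles
IMPL = {"t0": {"req": "t1"}, "t1": {"ack": "t0", "req": "t1"}}

def traces_refines(spec, impl, s0, t0):
    seen, stack = set(), [(s0, t0)]
    while stack:
        s, t = stack.pop()
        if (s, t) in seen:
            continue
        seen.add((s, t))
        for ev, t2 in impl[t].items():
            if ev not in spec[s]:
                return False, ev       # an impl trace the spec lacks
            stack.append((spec[s][ev], t2))
    return True, None

print(traces_refines(SPEC, IMPL, "s0", "t0"))
# (False, 'req'): t1 can do req twice in a row, the spec cannot
```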

19.
Lightweight formal methods have been regarded as an important approach to the development of component-based safety-critical systems. The paper proposes an approach which can formally specify and verify the contracts of static structure, dynamic behavior, and refinement of component systems based on the UML 2.0 superstructure. As a result, the correctness of the static contract can be established via type checking of interfaces and connectors. The dynamic contract can be verified by determining the cooperativeness of the integrated components, whose contracts are described with interface protocol state machines and their semantic models, namely contract automata. The refinement relation between a high-level component and its implementation is guaranteed by defining an alternating simulation between the contract automata of components at different levels.

20.