Similar Documents
20 similar documents found (search time: 31 ms)
1.
Abstract machine modelling is a popular technique for developing portable compilers. A compiler can be quickly realized by translating the abstract machine operations to target machine operations. The problem with these compilers is that they trade execution efficiency for portability. Typically, the code emitted by these compilers runs two to three times slower than the code generated by compilers that employ sophisticated code generators. This paper describes a C compiler that uses abstract machine modelling to achieve portability. The emitted target machine code is improved by a simple, classical rule-directed peephole optimizer. Our experiments with this compiler on four machines show that a small number of very general handwritten patterns (under 40) yields code that is comparable to the code from compilers that use more sophisticated code generators. As an added bonus, compilation time on some machines is reduced by 10 to 20 per cent.  相似文献   
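As a rough illustration of what such a rule-directed peephole pass can look like, the sketch below slides a two-instruction window over emitted code and rewrites any window that matches a handwritten rule. The instruction mnemonics and the rules themselves are illustrative assumptions, not the patterns used in the paper, and a real optimizer would use operand wildcards and liveness information rather than literal string matches.

/* Minimal sketch of a rule-directed peephole pass (illustrative only). */
#include <stdio.h>
#include <string.h>

struct rule {
    const char *first, *second;   /* two-instruction window to match        */
    const char *replacement;      /* single instruction emitted instead     */
};

static const struct rule rules[] = {
    /* store followed by an immediate reload of the same value:
       keep only the store.                                                  */
    { "sw r1,0(sp)", "lw r1,0(sp)", "sw r1,0(sp)" },
    /* add of zero feeding a move: fold into one move
       (safe only if r2 is dead afterwards; liveness is ignored here).       */
    { "addi r2,r3,0", "mv r4,r2", "mv r4,r3" },
};
#define NRULES (sizeof rules / sizeof rules[0])

/* One peephole pass over an array of instruction strings, in place.
   Returns the new instruction count.                                        */
static int peephole(const char *insns[], int n)
{
    const char *out[64];
    int m = 0;
    for (int i = 0; i < n; i++) {
        int matched = 0;
        if (i + 1 < n) {
            for (size_t r = 0; r < NRULES; r++) {
                if (strcmp(insns[i], rules[r].first) == 0 &&
                    strcmp(insns[i + 1], rules[r].second) == 0) {
                    out[m++] = rules[r].replacement;
                    i++;                      /* consume both instructions   */
                    matched = 1;
                    break;
                }
            }
        }
        if (!matched)
            out[m++] = insns[i];
    }
    for (int i = 0; i < m; i++)
        insns[i] = out[i];
    return m;
}

int main(void)
{
    const char *code[] = { "sw r1,0(sp)", "lw r1,0(sp)",
                           "addi r2,r3,0", "mv r4,r2" };
    int n = peephole(code, 4);
    for (int i = 0; i < n; i++)
        puts(code[i]);
    return 0;
}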

2.
This paper describes MATISSE, a compiler able to translate a MATLAB subset to C targeting embedded systems. MATISSE uses LARA, an aspect‐oriented programming language, to specify additional information and transformations to the input MATLAB code, for example, insertion of code for initialization of variables, and specification of types and shapes of variables. The compiler is being developed bearing in mind flexibility, multitarget and multitoolchain support, allowing for the generation of several implementations in C from the same reference code in MATLAB. In this paper, we also present a number of techniques being employed in MATLAB to C compilation, such as element‐wise mapping operations, matrix views, weak types, and intrinsics. We validate these techniques using MATISSE and a set of representative benchmarks. More specifically, we evaluate the compiler with a set of 31 benchmarks using an embedded system board and a desktop computer. The results show speedups up to 1.8× by employing information provided by LARA aspects, when compared with C code generated without additional user information. When compared with the execution time of the original code running on MATLAB, the execution time of the generated C code achieved a geometric mean speedup of 13×. Copyright © 2016 John Wiley & Sons, Ltd.  相似文献   
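As a hedged sketch of the element-wise mapping mentioned above, the fragment below shows how a MATLAB statement such as C = A .* B + k could be lowered to a single C loop over a flat array representation. The matrix_t layout and the function name are illustrative assumptions, not the code MATISSE actually emits.

#include <stddef.h>

/* Matrices stored as flat row-major arrays with an explicit element count.  */
typedef struct {
    double *data;
    size_t  numel;
} matrix_t;

/* Lowering of the element-wise expression  C = A .* B + k  into one loop.   */
static void elemwise_mul_add(const matrix_t *A, const matrix_t *B,
                             double k, matrix_t *C)
{
    for (size_t i = 0; i < C->numel; i++)
        C->data[i] = A->data[i] * B->data[i] + k;
}

Fusing the multiply and the add into one loop also avoids materializing the temporary array that a literal, operator-by-operator translation would allocate for A .* B.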

3.
Kasi Anantha, Fred Long. Software, 1990, 20(6): 537-554
There are two principal methods used to exploit the parallelism available on a parallel machine: the program to be executed can be optimized by hand, or the program can be automatically converted to parallel machine code by a compiler. The first method usually derives parallelism at the procedure level; a parallel program is written in a high-level language and typically has various modules executing in parallel. By contrast, the compiler methodically transforms the program into parallel code using various transformations, such as code movement. The automatic conversion of a program to parallel code is called compaction or parallelization. This paper describes the evolution of a new compaction program and presents a new algorithm for determining legal code movements. A simulator of the target architecture was used to estimate the execution times of a sample suite of programs before and after compaction. The results verify that substantial advantages arise from applying this compaction technique.  相似文献   

4.
A certifying compiler takes a source language program and produces object code, as well as a certificate that can be used to verify that the object code satisfies desirable properties, such as type safety and memory safety. Certifying compilation helps to increase both compiler robustness and program safety. Compiler robustness is improved since some compiler errors can be caught by checking the object code against the certificate immediately after compilation. Program safety is improved because the object code and certificate alone are sufficient to establish safety: even if the object code and certificate are produced on an unknown machine by an unknown compiler and sent over an untrusted network, safe execution is guaranteed as long as the code and certificate pass the verifier. Existing work in certifying compilation has addressed statically generated code. In this paper, we extend this to code generated at run time. Our goal is to combine certifying compilation with run-time code generation to produce programs that are both fast and verifiably safe. To achieve this goal, we present two new languages with explicit run-time code generation constructs: Cyclone, a type safe dialect of C, and TAL/T, a type safe assembly language. We have designed and implemented a system that translates a safe C program into Cyclone, which is then compiled to TAL/T, and finally assembled into executable object code. This paper focuses on our overall approach and the front end of our system; details about TAL/T will appear in a subsequent paper.

5.
Parallel HDL simulation is an effective way to accelerate the simulation and verification of large, complex VLSI systems, and HDL compilation techniques that support parallel simulation are a key enabling technology. This paper presents a Verilog compilation technique that supports parallel simulation: the compiler translates the Verilog description into C++ code, which is then compiled and linked against a parallel simulation kernel library to produce an executable parallel program. The paper describes the structure of the compiler, the code generation method, and the parallel simulation kernel library. The technique has been implemented in the parallel Verilog simulator ParaVer.

6.
One of the most promising approaches to Java acceleration in embedded systems is a bytecode-to-C ahead-of-time compiler (AOTC). It improves the performance of a Java virtual machine (JVM) by translating bytecode into C code, which is then compiled into machine code via an existing C compiler. One important design issue in AOTC is efficient exception handling. Since the excepting point and the exception handler may be located in different methods on a call stack, control transfer between them should be streamlined; at the same time, an exception is an “exceptional” event, so handling it should not slow down normal execution paths. Previous AOTCs often employed a technique called stack cutting based on a setjmp()/longjmp() pair, which we found incurs too much performance overhead. Also, when the AOTC and the interpreter are employed concurrently (e.g., some methods are AOTCed while other methods are interpreted), the performance of normal execution paths is affected more seriously. This paper proposes a simpler solution based on an exception check after each method call, merged with the garbage collection check to reduce its overhead. Our evaluation results on SPECjvm98 on Sun's CVM indicate that our technique can improve on the performance of stack cutting by more than 25%. A similar performance benefit can be noted in a hybrid execution environment of both the AOTC and the interpreter.
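The contrast drawn above can be made concrete with a small sketch of the second strategy: the callee records a pending exception in a per-thread flag, and the caller tests that flag, folded together with the garbage-collection safepoint poll, right after the call. All names below are hypothetical; this is not Sun CVM's actual AOTC output.

#include <stddef.h>
#include <stdio.h>

struct env {
    void *pending_exception;  /* non-NULL while an exception is in flight    */
    int   gc_requested;       /* set asynchronously to request a safepoint   */
};

static int callee(struct env *e)
{
    (void)e;                  /* stub: a real AOTCed method would go here    */
    return 42;
}

/* Translated caller: one test after the call covers both rare events.       */
static int caller(struct env *e, int *result)
{
    int r = callee(e);
    if (e->pending_exception != NULL || e->gc_requested) {   /* rare path    */
        if (e->pending_exception)
            return -1;        /* no local handler: propagate to our caller   */
        /* otherwise enter the GC safepoint here, then fall through          */
        e->gc_requested = 0;
    }
    *result = r;
    return 0;
}

int main(void)
{
    struct env e = { NULL, 0 };
    int v;
    if (caller(&e, &v) == 0)
        printf("result = %d\n", v);
    return 0;
}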

7.
8.
There is an enormous amount of parallelism exposed to fine-grain multithreaded architectures to cover latencies. It is a demanding task for a multithreading programmer to manage such a degree of parallelism by hand. To use multithreaded architectures efficiently it is essential to have compiler support for automatically partitioning programs into threads. This paper solves a fundamental problem in compiling for multithreaded architectures: automatically partitioning a program into threads. The focus of such partitioning is to overlap the remote communication latency and minimize the total execution time. We first formulate the partitioning problem based on a multithreaded execution cost model. Then, we prove such a formulation is NP-hard. Therefore, we propose two heuristic thread-partitioning methods to solve this problem in practice. The advanced partitioning algorithm is a novel extension of list scheduling, and it takes advantage of the cost model to generate near-optimum partitioning results. The remote-path-based partitioning algorithm is a simplified version of the advanced one, but it is easier to implement in a compiler. The two partitioning algorithms were implemented respectively in a thread partitioning testbed and a research EARTH-C compiler. The experimental results show that both partitioning algorithms are effective at generating efficient threaded code, and code generated by the compiler is comparable to hand-written code.

9.
10.
This paper describes compiler techniques that can translate standard OpenMP applications into code for distributed computer systems. OpenMP has emerged as an important model and language extension for shared-memory parallel programming. However, despite OpenMP's success on these platforms, it is not currently being used on distributed systems. The long-term goal of our project is to quantify the degree to which such a use is possible and develop supporting compiler techniques. Our present compiler techniques translate OpenMP programs into a form suitable for execution on a Software DSM system. We have implemented a compiler that performs this basic translation, and we have studied a number of hand optimizations that improve the baseline performance. Our approach complements related efforts that have proposed language extensions for efficient execution of OpenMP programs on distributed systems. Our results show that, while kernel benchmarks can show high efficiency of OpenMP programs on distributed systems, full applications need careful consideration of shared data access patterns. A naive translation (similar to OpenMP compilers for SMPs) leads to acceptable performance in very few applications only. However, additional optimizations, including access privatization, selective touch, and dynamic scheduling, result in a 31% average improvement on our benchmarks.
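A hedged sketch of the kind of translation described here is shown below: the OpenMP loop is block-distributed across processes, and the read-only coefficient array is privatized so later reads are served from local memory instead of the software DSM. The dsm_* runtime calls are hypothetical and stubbed for a single process so the fragment compiles stand-alone; this is not the output of the authors' compiler.

#include <string.h>

#define N 1024

/* Hypothetical DSM runtime interface, stubbed for a single process.         */
static int  dsm_id(void)      { return 0; }
static int  dsm_nprocs(void)  { return 1; }
static void dsm_barrier(void) { }

double a[N], b[N], coef[N];          /* shared arrays living in the DSM      */

/* Original OpenMP loop:
 *   #pragma omp parallel for
 *   for (i = 0; i < N; i++) a[i] = coef[i] * b[i];
 */
void translated_loop(void)
{
    /* Baseline translation: block-distribute the iteration space.           */
    int p     = dsm_id();
    int np    = dsm_nprocs();
    int chunk = (N + np - 1) / np;
    int lo    = p * chunk;
    int hi    = (lo + chunk < N) ? lo + chunk : N;

    /* Access privatization: copy the read-only array once into private
       memory so later reads do not go through the DSM coherence protocol.   */
    static double coef_priv[N];
    memcpy(coef_priv, coef, sizeof coef);

    for (int i = lo; i < hi; i++)
        a[i] = coef_priv[i] * b[i];

    dsm_barrier();                   /* implicit barrier of the parallel for */
}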

11.
This paper describes the design and implementation of an optimizing compiler that automatically generates profile information to assist classic code optimizations. This compiler contains two new components, an execution profiler and a profile-based code optimizer, which are not commonly found in traditional optimizing compilers. The execution profiler inserts probes into the input program, executes the input program for several inputs, accumulates profile information and supplies this information to the optimizer. The profile-based code optimizer uses the profile information to expose new optimization opportunities that are not visible to traditional global optimization methods. Experimental results show that the profile-based code optimizer significantly improves the performance of production C programs that have already been optimized by a high-quality global code optimizer.  相似文献   
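The execution-profiler half of such a system can be illustrated with a minimal sketch: the inserted probes are plain counters on the two arms of a branch, and after a few training runs the accumulated counts tell the optimizer which arm is hot. The example is illustrative and not the authors' implementation.

#include <stdio.h>

static unsigned long probe[2];        /* one counter per branch arm          */

int abs_val(int x)
{
    if (x < 0) {
        probe[0]++;                   /* inserted probe: "then" arm taken    */
        return -x;
    }
    probe[1]++;                       /* inserted probe: "else" arm taken    */
    return x;
}

int main(void)
{
    /* A training run: the profile shows the "else" arm dominates, so a
       profile-based optimizer would lay that path out fall-through.         */
    long sum = 0;
    for (int i = -10; i < 1000; i++)
        sum += abs_val(i);
    printf("then: %lu  else: %lu  (sum=%ld)\n", probe[0], probe[1], sum);
    return 0;
}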

12.
OpenSmalltalk-VM is a virtual machine (VM) for languages in the Smalltalk family (eg, Squeak and Pharo), which is itself written in a subset of Smalltalk that can easily be translated to C. VM development is done in Smalltalk, an activity we call “simulation.” The production VM is then derived by translating the core VM code to C. As a result, two execution models coexist: simulation, where the Smalltalk code is executed on top of a Smalltalk VM, and production, where the same code is compiled to an executable through a C compiler. The whole VM execution can be simulated: the heap is represented as a huge byte array, the VM code is executed as Smalltalk, and the native code generated by the just-in-time (JIT) compiler is executed by a processor simulator. All the Smalltalk development tools, such as the debugger, are then available while simulating. In addition, in simulation, it is also possible to use debugging features such as single stepping in the machine code generated by the JIT compiler. The Smalltalk development tools combined with the simulation debugging features provide developers with a productive environment in which to extend and debug the VM. In this article, we detail the VM simulation infrastructure and report our experiences developing and debugging VM features within it such as the garbage collector and the JIT compiler.  相似文献   

13.
Predicated execution is a promising architectural feature for exploiting instruction-level parallelism in the presence of control flow. Compiling for predicated execution involves converting program control flow into conditional, or predicated, instructions. This process is known as if-conversion. In order to apply if-conversion effectively, one must address two major issues: what should be if-converted and when the if-conversion should be performed. A compiler's use of predication as a representation is most effective when large amounts of code are if-converted and when if-conversion is performed early in the compilation procedure. On the other hand, efficient execution of code generated for a processor with predicated execution requires a delicate balance between control flow and predication. The appropriate balance is tightly coupled with scheduling decisions and detailed processor characteristics. This paper presents a compilation framework based on partial reverse if-conversion that allows the compiler to maximize the benefits of predication as a compiler representation while delaying the final balancing of control flow and predication to schedule time.
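A small, hedged illustration of what if-conversion does is given below in plain C rather than a real predicated instruction set: the branch inside the loop body becomes a predicate that guards a conditional assignment, leaving a straight-line body the scheduler can move freely.

/* Before if-conversion: control flow inside the loop body.                  */
void clamp_branchy(int *a, int n, int limit)
{
    for (int i = 0; i < n; i++) {
        if (a[i] > limit)
            a[i] = limit;
    }
}

/* After if-conversion: the comparison sets a predicate p that guards the
   assignment, so the loop body is a single straight-line block.             */
void clamp_predicated(int *a, int n, int limit)
{
    for (int i = 0; i < n; i++) {
        int p = a[i] > limit;              /* compare sets the predicate     */
        a[i] = p ? limit : a[i];           /* predicated move                */
    }
}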

14.
Parallel languages allow the programmer to express parallelism at a high level. The management of parallelism and the generation of interprocessor communication is left to the compiler and the runtime system. This approach to parallel programming is particularly attractive if a suitable widely accepted parallel language is available. High Performance Fortran (HPF) has emerged as the first popular machine independent parallel language, and remarkable progress has been made towards compiling HPF efficiently. However, the performance of HPF programs is often poor and unpredictable, and obtaining adequate performance is a major stumbling block that must be overcome if HPF is to gain widespread acceptance. The programmer is often in the dark about how to improve the performance of an HPF program since poor performance can be attributed to a variety of reasons, including poor choice of algorithm, limited use of parallelism, or an inefficient data mapping. This paper presents a profiling tool that allows the programmer to identify the regions of the program that execute inefficiently, and to focus on the potential causes of poor performance. The central idea is to distinguish the code that is executing efficiently from the code that is executing poorly. Efficient code uses all processors of a parallel system to make progress, while inefficient code causes processors to wait, execute replicated code, idle, communicate, or perform compiler bookkeeping. We designate the latter code as non-scalable, since adding more processors generally does not lead to improved performance for such code. By analogy, the former code is called scalable. The tool presented here separates a program into scalable and non-scalable components and identifies the causes of non-scalability of different components. We show that compiler information is the key to dividing the execution times into logical categories that are meaningful to the programmer. We present the design and implementation of a profiler that is integrated with Fx, a compiler for a variant of HPF. The paper includes two examples that demonstrate how the data reported by the profiler are used to identify and resolve performance bugs in parallel programs. © 1997 John Wiley & Sons, Ltd.  相似文献   

15.
We present a programming language called TCEL (Time-Constrained Event Language), whose semantics are based on time-constrained relationships between observable events. Such a semantics infers only those timing constraints necessary to achieve real-time correctness, without overconstraining the system. Moreover, an optimizing compiler can exploit this looser semantics to help tune the code, so that its worst-case execution time is consistent with its real-time requirements. In this paper we describe such a transformation system, which works in two phases. First, the TCEL source code is translated into an intermediate representation. Then an instruction-scheduling algorithm rearranges selected unobservable operations and synthesizes tasks guaranteed to respect the original event-based constraints  相似文献   

16.
This paper presents a new method that can be applied by a parallelizing compiler to find, without user intervention, the iteration and data decompositions that minimize communication and load imbalance overheads in parallel programs targeted at NUMA architectures. One of the key ingredients in our approach is the representation of locality as a locality-communication graph (LCG) and the formulation of the compiler technique as a mixed integer nonlinear programming (MINLP) optimization problem on this graph. The objective function and constraints of the optimization problem model communication costs and load imbalance. The solution to this optimization problem is a decomposition that minimizes the parallel execution overhead. This paper summarizes the process of how the compiler extracts the locality information from a nonannotated code and focuses on how this compiler can derive the optimization problem, solve it, and generate the parallel code with the automatically selected iteration and data distributions. In addition, we include a discussion about our model and the solutions (the decompositions) that it provides. The approach presented in the paper is evaluated using several benchmarks. The experimental results demonstrate that the MINLP formulation does not increase compilation time significantly and that our framework generates very efficient iteration/data distributions for a variety of NUMA machines.
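As a rough, illustrative sketch of the shape such a formulation can take (the symbols and cost terms below are assumptions, not necessarily the paper's exact model), the decomposition d is chosen to minimize communication over the edges of the locality-communication graph plus a weighted load-imbalance term:

% Illustrative sketch only, not the paper's formulation.
\min_{d \in D}\;\; \sum_{(i,j)\in E(\mathrm{LCG})} c_{ij}(d)
\;+\; \lambda\,\max_{p}\Bigl(T_p(d) \;-\; \tfrac{1}{P}\textstyle\sum_{q=1}^{P} T_q(d)\Bigr)

where c_{ij}(d) is the communication cost induced on edge (i,j) of the locality-communication graph by decomposition d, T_p(d) is the work assigned to processor p, D is the set of candidate iteration/data distributions, P is the number of processors, and \lambda weights load imbalance against communication.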

17.
Flash Sheridan. Software, 2007, 37(14): 1475-1488
A simple technique is presented for testing a C99 compiler, by comparing its output with the output from pre‐existing tools. The advantage to this approach is that new test cases can be added in bulk from existing sources, reducing the need for in‐depth investigation of correctness issues and for creating new test code by hand. This technique was used in testing the PalmSource Palm OS® Cobalt ARM C/C++ cross‐compiler for Palm‐Powered® personal digital assistants, primarily for standards compliance and the correct execution of generated code. The technique described here found several hundred bugs, mostly in our in‐house code, but also in longstanding high‐quality front‐ and back‐end code from Edison Design Group and Apogee Software. It also found 18 bugs in the GNU C compiler, as well as a bug specific to the Apple version of GCC, a bug specific to the Suse version of GCC, and a dozen bugs in versions of GCC for the ARM processor, several of which were critical. Copyright © 2007 John Wiley & Sons, Ltd.  相似文献   
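In the same spirit, a minimal differential-testing driver is sketched below: the same test case is built with a reference compiler and the compiler under test, both binaries are run, and their outputs are compared. The compiler names, flags, and file names are placeholders, and a real harness would also compare exit codes and diagnostics and handle timeouts; this is not the tooling used in the paper.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Read at most len-1 bytes of a file into buf; returns 0 on success.        */
static int slurp(const char *path, char *buf, size_t len)
{
    FILE *f = fopen(path, "rb");
    if (!f) return -1;
    size_t n = fread(buf, 1, len - 1, f);
    buf[n] = '\0';
    fclose(f);
    return 0;
}

int main(void)
{
    char ref[4096], test[4096];

    /* Build the same test case with a reference compiler and the compiler
       under test (command names are hypothetical placeholders).             */
    if (system("cc    -O2 -o ref_bin  testcase.c") != 0) return 1;
    if (system("newcc -O2 -o test_bin testcase.c") != 0) return 1;

    /* Run both binaries, capturing whatever they print.                     */
    if (system("./ref_bin  > ref.out")  != 0) return 1;
    if (system("./test_bin > test.out") != 0) return 1;

    if (slurp("ref.out", ref, sizeof ref) || slurp("test.out", test, sizeof test))
        return 1;

    if (strcmp(ref, test) != 0) {
        puts("MISMATCH: possible code-generation bug in the compiler under test");
        return 2;
    }
    puts("outputs agree");
    return 0;
}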

18.
To address the poor portability and low execution efficiency of software PLCs based on a virtual machine mechanism, this work studies a software PLC system based on embedded machine code. Through a ladder-diagram compiler, a code parser/generator, and an assembler, the logic program developed by the user is compiled directly into embedded machine code that can execute in the target CPU environment. This approach eliminates the interpretation of virtual PLC instructions and improves the execution efficiency of the software PLC.

19.
Smalltalk-80 (hereafter referred to as Smalltalk), which is one of the most productive programming languages/environments, is very well suited for prototyping of applications but it is less well suited for delivering applications because applications can neither run in isolation from the Smalltalk environment nor be combined with other programs written in other languages. One way to make Smalltalk suitable for delivering applications is to translate Smalltalk into a compiler language such as C. By translating Smalltalk code into portable and interoperable C code, it is possible to deliver a stand-alone version of Smalltalk applications, and to develop an application partly in Smalltalk and partly in C. However, there are some difficulties in translating Smalltalk code into such a C code. First, the execution model of Smalltalk, which creates activation records as objects, is very different from that of C, and second, Smalltalk and C have very different approaches to storage management. We have implemented SPiCE, a system for translating Smalltalk into C. Our approach to the translation is to create runtime replacement classes implementing the same functionality of Smalltalk classes that are inherently part of the Smalltalk execution model, and to provide semi-conservative real-time compacting garbage collection that works without language support  相似文献   

20.
In this paper we present a new environment called MERPSYS that allows simulation of parallel application execution time on cluster-based systems. The environment offers a modeling application that uses the Java language extended with methods representing message-passing communication routines. It also offers a graphical interface for building a system model that incorporates various hardware components such as CPUs, GPUs, and interconnects, and it easily allows various formulas to model the execution and communication times of particular blocks of code. A simulator engine within the MERPSYS environment simulates execution of the application, which consists of processes running various codes to which distinct labels are assigned. The simulator runs one Java thread per label and scales computation and communication times accordingly. This approach allows fast coarse-grained simulation of large applications on large-scale systems. We have performed tests and verification of results from the simulator for three real parallel applications implemented with C/MPI and run on real HPC clusters: a master-slave code computing similarity measures of points in a multidimensional space, a geometric single-program-multiple-data parallel application computing heat distribution, and a divide-and-conquer application performing merge sort. In all cases the simulator gave results very similar to the real ones on configurations tested up to 1000 processes. Furthermore, it allowed us to make predictions of execution times on configurations beyond the hardware resources available to us.
