Similar Documents
Found 20 similar documents (search time: 0 ms)
1.
2.
Interprocedural data flow analysis has many applications in software optimization, software maintenance, and software testing. When writing software that uses reusable components, analyzing the whole program with data flow analysis is inefficient, and when the library source code is unavailable it is not even directly possible. This paper performs data flow analysis when new components are built on top of an existing component library: summary information is computed for the newly built library, and this summary information is then used to analyze new components. Using the precomputed summary functions of the library, large, extensible library components can be built at a relatively small analysis cost.
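To make the summary-function idea concrete, here is a minimal sketch (Python for illustration; the class and facts are hypothetical, and this is not the paper's algorithm): each library procedure's data-flow effect is precomputed once as a GEN/KILL transfer function, and client components are then analyzed by applying and composing those summaries without re-analyzing the library bodies.

```python
# Minimal sketch of library summary functions (illustrative only).
from dataclasses import dataclass

@dataclass(frozen=True)
class Summary:
    """GEN/KILL transfer function of a library procedure over a set of
    data-flow facts (e.g., definitions that reach the procedure's exit)."""
    gen: frozenset
    kill: frozenset

    def apply(self, facts: frozenset) -> frozenset:
        return (facts - self.kill) | self.gen

    def compose(self, then: "Summary") -> "Summary":
        # Effect of running `self` followed by `then`.
        return Summary(gen=(self.gen - then.kill) | then.gen,
                       kill=self.kill | then.kill)

# Precomputed once and stored alongside the library:
lib_init  = Summary(gen=frozenset({"x_defined"}), kill=frozenset())
lib_reset = Summary(gen=frozenset(), kill=frozenset({"x_defined"}))

# A client calls init() then reset(); it is analyzed using only the
# summaries, never the library bodies themselves.
call_seq = lib_init.compose(lib_reset)
print(call_seq.apply(frozenset()))   # frozenset(): the definition is killed
```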

3.
Special-purpose processors such as DSPs mainly target specific applications, so their instruction sets often support only a limited set of data types. When such a processor is programmed in a high-level language and a data type the processor does not support is used, the compiler must translate it into a sequence of supported instructions while preserving the semantics. This paper presents a method for handling such irregular data types in a VLIW DSP compiler, covering the annotation of intermediate code that contains irregular data types, the computation of scheduling dependences, and improvements to register allocation. The method requires relatively small changes to the compiler and is efficient.
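As a hedged illustration of the kind of lowering involved (this is not the paper's method, and the 32-bit-only DSP is hypothetical), consider a 64-bit addition on a target that supports only 32-bit arithmetic: the compiler must emit a semantics-preserving sequence of supported operations, emulated here in Python.

```python
# Lowering an unsupported ADD64 into supported 32-bit operations
# (illustrative emulation of the instruction sequence a compiler would emit).
MASK32 = 0xFFFFFFFF

def lower_add64(a: int, b: int) -> int:
    lo = (a & MASK32) + (b & MASK32)
    carry = lo >> 32                               # 0 or 1
    hi = ((a >> 32) + (b >> 32) + carry) & MASK32  # high words plus carry
    return (hi << 32) | (lo & MASK32)

assert lower_add64(2**32 - 1, 1) == 2**32          # carry propagates
assert lower_add64(0xDEADBEEF12345678, 0x1111) == 0xDEADBEEF12345678 + 0x1111
```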

4.
Empirical optimizers like ATLAS have been very effective in optimizing computational kernels in libraries. The best choice of parameters such as tile size and degree of loop unrolling is determined in ATLAS by executing different versions of the computation. In contrast, optimizing compilers use a model-driven approach to program transformation. While the model-driven approach of optimizing compilers is generally orders of magnitude faster than ATLAS-like library generators, its effectiveness can be limited by the accuracy of the performance models used. In this paper, we describe an approach where a class of computations is modeled in terms of constituent operations that are empirically measured, thereby allowing modeling of the overall execution time. The performance model with empirically determined cost components is used to select library calls and choose data layout transformations in the context of the Tensor Contraction Engine, a compiler for a high-level domain-specific language for expressing computational models in quantum chemistry. The effectiveness of the approach is demonstrated through experimental measurements on representative computations from quantum chemistry.
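A toy sketch of the hybrid idea, under my own simplifying assumptions (not the paper's code): the per-element costs of constituent operations are timed once empirically, then plugged into an analytical model that ranks alternative implementations without executing each full version.

```python
# Empirically measured cost components feeding an analytical model
# (illustrative; the operations and the "layout" model are invented).
import time

def per_element_cost(op, size=100_000, trials=5):
    best = float("inf")
    for _ in range(trials):
        t0 = time.perf_counter()
        op(size)
        best = min(best, time.perf_counter() - t0)
    return best / size

cost_copy = per_element_cost(lambda n: list(range(n)))                # movement
cost_madd = per_element_cost(lambda n: sum(i * 3 for i in range(n)))  # arithmetic

def predicted_time(n_elems, layout):
    # Toy model: the "transpose_first" layout pays one extra copy pass.
    extra_passes = {"direct": 0, "transpose_first": 1}[layout]
    return n_elems * (cost_madd + extra_passes * cost_copy)

best = min(["direct", "transpose_first"], key=lambda l: predicted_time(10**7, l))
print(best)   # the model picks a layout without running full computations
```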

5.
Data flow analysis algorithms can be classified as flow-sensitive or flow-insensitive. For efficiency, flow-insensitive interprocedural analysis does not use the intraprocedural control flow information associated with each procedure. This paper presents a flow-insensitive data flow analysis algorithm for computing interprocedural pointer-induced aliases. The precision of the analysis is improved in two ways: by exploiting certain kinds of kill information that can be computed in advance, and by computing the alias information generated within each procedure rather than only the alias information generated at each procedure's exit point.
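For flavor, here is a minimal flow-insensitive points-to sketch (illustrative only; the paper's algorithm additionally exploits precomputed kill information, which this toy version omits): statement order is deliberately ignored, and the constraints are iterated to a fixed point.

```python
# Flow-insensitive points-to analysis over two statement kinds
# ("dst = &src" and "dst = src"); the order of statements is irrelevant.
def flow_insensitive_points_to(stmts):
    pts = {}                                  # var -> set of pointed-to names
    changed = True
    while changed:
        changed = False
        for kind, dst, src in stmts:
            if kind == "addr":                # dst = &src
                new = {src}
            elif kind == "copy":              # dst = src
                new = pts.get(src, set())
            else:
                continue
            cur = pts.setdefault(dst, set())
            if not new <= cur:
                cur |= new
                changed = True
    return pts

stmts = [("addr", "p", "x"), ("copy", "q", "p"), ("addr", "p", "y")]
pts = flow_insensitive_points_to(stmts)
print(pts["p"] & pts["q"])   # {'x', 'y'}: p and q may alias
```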

6.
Many novel computer architectures, such as array processors and multiprocessors, which achieve high performance through the use of concurrency, exploit variations of the von Neumann model of computation. The effective utilization of these machines makes special demands on programmers and their programming languages, such as the structuring of data into vectors or the partitioning of programs into concurrent processes. In comparison, the data flow model of computation demands only that the principle of structured programming be followed. A data flow program, often represented as a data flow graph, is a program that expresses a computation by indicating the data dependencies among operators. A data flow computer is a machine designed to take advantage of concurrency in data flow graphs by executing data-independent operations in parallel. In this paper, we discuss the design of a high-level language (DFL: Data Flow Language) suitable for data flow computers. Some sample procedures in DFL are presented. The implementation aspects are not discussed in detail, since no new problems were encountered. The language DFL embodies the concepts of functional programming, but in appearance closely resembles Pascal. The language is a better vehicle than the data flow graph for expressing a parallel algorithm. The compiler has been implemented on a DEC 1090 system in Pascal.
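A toy illustration of the execution model (not DFL itself; the node names are invented): an operator fires as soon as its operands are available, so all operators that become ready in the same wave are data-independent and could execute in parallel on a data flow machine.

```python
# Dataflow-graph evaluation: fire every node whose inputs are ready.
graph = {                         # node: (operator, input nodes)
    "a": (lambda: 2, []),
    "b": (lambda: 3, []),
    "s": (lambda a, b: a + b, ["a", "b"]),   # s and p are independent:
    "p": (lambda a, b: a * b, ["a", "b"]),   # they fire in the same wave
    "r": (lambda s, p: s - p, ["s", "p"]),
}

values = {}
while len(values) < len(graph):
    ready = [n for n, (_, ins) in graph.items()
             if n not in values and all(i in values for i in ins)]
    for n in ready:               # each wave could run in parallel
        op, ins = graph[n]
        values[n] = op(*(values[i] for i in ins))

print(values["r"])                # (2 + 3) - (2 * 3) = -1
```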

7.
With the development of sensor technology and embedded systems, building large-scale, low-cost sensor networks, a critical step toward pervasive sensing, becomes possible. One of the major challenges in developing sensor network applications is improving the execution efficiency of programs running on power-constrained embedded devices. While profile-guided code optimization has been widely used as a compiler-level technique for improving the performance of programs running on general-purpose computers, it has not been applied to sensor network programs because of several practical obstacles. In this paper, we overcome these obstacles and design a more effective profile-guided code placement approach for sensor network programs. Specifically, we model the execution of sensor network programs taking nondeterministic inputs as discrete-time Markov processes, and propose a novel approach named Code Tomography to estimate the parameters of the Markov models that reflect the programs' dynamic execution behavior, using only end-to-end timing information measured at the start and end points of each procedure in the source code. The parameters estimated by Code Tomography are fed back to the compiler to optimize code placement. The evaluation results demonstrate that Code Tomography achieves satisfactory estimation accuracy with low profiling overhead, and that the branch misprediction rate is reduced after reorganizing the code placement based on the profiling results. Code Tomography is also useful for purposes such as post-mortem analysis, debugging and energy profiling of sensor network programs.
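A back-of-envelope illustration of the underlying idea (all numbers hypothetical; Code Tomography itself estimates the parameters of full discrete-time Markov models, not a single branch): given known per-path costs, a branch probability can be recovered from end-to-end timings alone, with no per-branch instrumentation.

```python
# Inferring a branch probability from end-to-end timing measurements.
T_THEN, T_ELSE = 8.0, 2.0      # assumed known costs of the two paths (ms)

end_to_end = [8.1, 2.2, 7.9, 2.0, 8.0, 2.1, 7.8, 2.1]   # measured totals (ms)

mean_t = sum(end_to_end) / len(end_to_end)
# E[T] = p*T_THEN + (1 - p)*T_ELSE, so solve for p:
p_taken = (mean_t - T_ELSE) / (T_THEN - T_ELSE)
print(f"estimated taken probability: {p_taken:.2f}")      # ~0.50
# A compiler can then place the likelier path on the fall-through edge.
```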

8.
Code Obfuscation Transformations in Software Protection
This paper describes two code obfuscation transformations: transformations based on opaque predicates, and transformations that degrade high-level control structures. Both are semantics-preserving transformations of the software's source code; they make reverse engineering the software difficult and thereby protect its intellectual property.
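A concrete textbook example of an opaque predicate (a standard construction, not necessarily one the paper uses): x(x+1) is always even, so the predicate below is constantly true, yet that fact is not evident to a casual reverse engineer or a simple static analyzer, and the dead branch can hold decoy code.

```python
# An opaquely-true predicate guarding the real code path.
def transformed(real_step, x):
    if (x * (x + 1)) % 2 == 0:        # always true, but not obviously so
        return real_step()            # real code
    else:
        return real_step() ^ 0xDEAD   # unreachable decoy

assert all(transformed(lambda: 42, x) == 42 for x in range(100))
```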

9.
A set of methods for interprocedural analysis is proposed. First, an approach for interprocedural constant propagation is given. The concept of constant propagation is then extended to meet the needs of data dependence analysis: besides definite constants, constant ranges can also be propagated. The related propagation rules are introduced, and an idea for computing the Return function is given. This approach can solve almost all interprocedural constant propagation problems with non-recursive calls. Second, a multiple-version parallelizing technique is proposed for the alias problem. The work related to this paper has been implemented on a shared-memory parallel computer.
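A toy sketch of propagating constant ranges (intervals) across a call; the interval rule and the tiny "Return function" below are illustrative stand-ins for the paper's definitions, not its actual algorithm.

```python
# Interval (constant-range) propagation through a procedure call.
def interval_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def return_function(arg_range):
    """Return-value range of `def f(x): return x + 3` for a given
    argument range: a miniature Return function for the callee."""
    return interval_add(arg_range, (3, 3))

ret_range = return_function((0, 9))   # caller knows the argument is in [0, 9]
print(ret_range)                      # (3, 12)
# Data dependence analysis can use such ranges, e.g., to show that
# the array regions touched by different loop iterations are disjoint.
```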

10.
This paper describes the design and implementation of an optimizing compiler that automatically generates profile information to assist classic code optimizations. This compiler contains two new components, an execution profiler and a profile-based code optimizer, which are not commonly found in traditional optimizing compilers. The execution profiler inserts probes into the input program, executes the input program for several inputs, accumulates profile information and supplies this information to the optimizer. The profile-based code optimizer uses the profile information to expose new optimization opportunities that are not visible to traditional global optimization methods. Experimental results show that the profile-based code optimizer significantly improves the performance of production C programs that have already been optimized by a high-quality global code optimizer.
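A drastically simplified picture of the profiler half (block names hypothetical; in the real compiler the probes are inserted automatically, not written by hand): probes accumulate basic-block counts over several training inputs, and the counts are handed to the profile-based optimizer.

```python
# Probe-based execution profiling over several inputs.
from collections import Counter

profile = Counter()

def probe(block_id):
    profile[block_id] += 1

def instrumented_program(n):
    probe("entry")
    for i in range(n):
        probe("loop_body")
        if i % 2 == 0:
            probe("even_branch")
    probe("exit")

for training_input in (4, 7, 10):     # execute for several inputs
    instrumented_program(training_input)

print(profile.most_common(2))         # hottest blocks guide the optimizer
```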

11.
GAGAN AGRAWAL, JOEL SALTZ, Software, 1997, 27(5): 519-545
Data parallel languages like High Performance Fortran (HPF) are emerging as the architecture-independent mode of programming distributed memory parallel machines. In this paper, we present the interprocedural optimizations required for compiling applications with irregular data access patterns when they are coded in such data parallel languages. We have developed an Interprocedural Partial Redundancy Elimination (IPRE) algorithm for optimized placement of the runtime preprocessing and collective communication routines inserted for managing communication in such codes. We also present two new interprocedural optimizations: placement of scatter routines and use of coalescing and incremental routines. We then describe how program slicing can be used to apply IPRE in more complex scenarios. We have done a preliminary implementation of the schemes presented here using the Fortran D compilation system as the necessary infrastructure, and we present experimental results from two codes compiled using our system to demonstrate the efficacy of the presented schemes. ©1997 John Wiley & Sons, Ltd.

12.
Crisp input and output data are fundamentally indispensable in traditional data envelopment analysis (DEA). However, the input and output data in real-world problems are often imprecise or ambiguous. Some researchers have proposed interval DEA (IDEA) and fuzzy DEA (FDEA) to deal with imprecise and ambiguous data in DEA. Nevertheless, many real-life problems use linguistic data that cannot be expressed as interval data, and a large number of input variables in fuzzy logic can require a significant number of rules to specify a dynamic model. In this paper, we propose an adaptation of standard DEA to conditions of uncertainty. The proposed approach is based on a robust optimization model in which the input and output parameters are constrained to lie within an uncertainty set, with additional constraints based on the worst-case solution with respect to that set. Our robust DEA (RDEA) model seeks to maximize efficiency (as in standard DEA) but under the assumption of a worst-case efficiency defined by the uncertainty set and its supporting constraints. A Monte-Carlo simulation is used to compute the conformity of the rankings in the RDEA model. The contribution of this paper is fourfold: (1) we consider ambiguous, uncertain and imprecise input and output data in DEA; (2) we address the gap in the imprecise DEA literature for problems not suitable or difficult to model with interval or fuzzy representations; (3) we propose a robust optimization model in which the input and output parameters are constrained to lie within an uncertainty set, with additional worst-case constraints; and (4) we use Monte-Carlo simulation to specify a range of Gamma in which the rankings of the DMUs occur with high probability.
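For orientation, here is a hedged sketch of what such a robust counterpart can look like, written over the standard multiplier form of the CCR model with a Bertsimas–Sim-style uncertainty budget Γ (generic notation, not necessarily the paper's exact formulation):

```latex
\begin{align*}
\max_{u \ge 0,\; v \ge 0} \quad & \sum_r u_r \, y_{r0} \\
\text{s.t.} \quad & \sum_i v_i \, x_{i0} = 1, \\
& \sum_r u_r \, \tilde{y}_{rj} - \sum_i v_i \, \tilde{x}_{ij} \le 0
  \qquad \forall j,\; \forall (\tilde{x}, \tilde{y}) \in \mathcal{U}_\Gamma,
\end{align*}
```

where the uncertainty set \(\mathcal{U}_\Gamma\) allows at most Γ coefficients per constraint to deviate from their nominal values, so every constraint is enforced against its worst case over \(\mathcal{U}_\Gamma\).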

13.
The current work of the authors in the area of software tools for automatic construction of compilers is described. This focuses on attempts to provide for automatic production of the semantic-analysis and intermediate-code-generation parts of the Cigale compiler-writing system, developed at the University of Nice. This work relies on use of the Amsterdam Compiler Kit (ACK) to ensure a full set of optimizers and code generators based on a semi-universal intermediate language, and, therefore, emphasizes the filling of the gap between parsing and the intermediate language. It is intended as a pragmatic contribution to the automation of the production of true compilers (rather than mere program evaluators) that generate efficient machine code.

14.
This paper describes the implementation of a LIS compiler for GCOS-7. LIS is a high-level system implementation language developed at CII-Honeywell Bull in the mid-1970s, and experience with the language and its implementation has largely influenced the design of Ada. The design of the compiler was particularly aimed at efficient code generation. Design decisions concerning the run-time organization in relation to procedure call and separate compilation are discussed. The structure of the compiler is described, with emphasis on the articulation between the different phases of the code generator. Experience with the bootstrap is related.

15.
The program analyzer generator PAG described in this paper attempts to offer the best of both worlds: specification languages based on the clean theory of abstract interpretation, and efficient implementation methods from the theory of data flow analysis. PAG has a high-level functional input language for specifying data flow analyses. It supports the generation of complex data structures and is therefore not limited to bit-vector problems. PAG-generated interprocedural analyzers can be easily integrated into existing compilers. PAG has been used successfully in the ESPRIT project COMPARE to generate several analyzers (including alias analysis and constant propagation) for industrial-quality ANSI-C and Fortran90 compilers, and is now marketed by the spin-off company AbsInt. A simplified version of PAG can be tested interactively over the Web.
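At the core of any analyzer generated this way is a fixed-point solver over a lattice of data-flow values. The sketch below (illustrative, with set union as the join; PAG's generated code is of course far more elaborate) shows the basic shape of such a solver and why it is not limited to bit vectors.

```python
# Worklist fixed-point solver with sets (not bit vectors) as values.
def solve(cfg, transfer, init=frozenset()):
    """cfg: node -> successor list; transfer: (node, value) -> value.
    Joins with set union until no value changes."""
    values = {n: init for n in cfg}
    work = list(cfg)                       # visit every node at least once
    while work:
        n = work.pop()
        out = transfer(n, values[n])
        for s in cfg[n]:
            merged = values[s] | out
            if merged != values[s]:
                values[s] = merged
                work.append(s)
    return values

cfg = {"entry": ["a"], "a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
gen = {"a": {"x=1"}, "c": {"x=2"}}         # reaching-definitions GEN sets
result = solve(cfg, lambda n, v: v | frozenset(gen.get(n, ())))
print(sorted(result["d"]))                 # ['x=1', 'x=2'] reach the join
```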

16.
Modern compilers offer a great and ever-increasing number of options that modify the features and behavior of a compiled program. Many of these options go unused because exploiting them requires comprehensive knowledge of both the underlying architecture and the internal workings of the compiler. In this context there is usually not a single design goal but a more complex set of objectives, and the dependencies between the different goals are difficult to infer a priori. This paper proposes a strategy for tuning the compilation of any given application by automatically varying the compilation options through multi-objective optimization and evolutionary computation driven by the NSGA-II algorithm, which finds compilation options that simultaneously optimize several objectives. The advantages of the proposal are illustrated through a case study based on the well-known Apache web server. The strategy found improvements of up to 7.5% in context switches and up to 27% in L2 cache misses, and it also uncovered the most important bottlenecks in the application's performance.
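A greatly simplified stand-in for such a pipeline (the flags are only examples; exhaustive enumeration with a Pareto filter replaces the real NSGA-II run, and a fabricated benchmark stands in for compiling and measuring Apache):

```python
# Multi-objective compilation-flag tuning: keep the Pareto-optimal set.
import itertools, random

FLAGS = ["-O2", "-funroll-loops", "-fomit-frame-pointer", "-flto"]

def benchmark(flag_set):
    # Placeholder for "compile with these flags, run, and measure";
    # returns fabricated (context switches, L2 misses), both minimized.
    rnd = random.Random(",".join(sorted(flag_set)))
    return (rnd.uniform(0.8, 1.2), rnd.uniform(0.8, 1.2))

def dominates(b, a):      # b at least as good everywhere, better somewhere
    return all(y <= x for x, y in zip(a, b)) and b != a

candidates = [frozenset(c) for r in range(len(FLAGS) + 1)
              for c in itertools.combinations(FLAGS, r)]
scores = {c: benchmark(c) for c in candidates}
pareto = [c for c in candidates
          if not any(dominates(scores[o], scores[c]) for o in candidates)]
for c in pareto:
    print(sorted(c), scores[c])
```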

17.
邓定胜, 《计算机科学》, 2015, 42(2): 191-197, 223
Existing disk power management methods cannot handle the short idle periods of energy-hungry parallel applications, require large-scale code changes, and incur considerable energy overhead. To address these problems, this paper proposes a compiler-directed data access scheduling technique. The technique has two phases: in the first, the compiler analyzes the parallel application, extracts its disk access pattern, and generates a schedule table; in the second, a "data access scheduler" performs the data accesses according to the schedule table. Compared with previous software-based strategies, the proposed method requires no changes to code or data structures. Experimental evaluation shows that for data-intensive workloads the method effectively improves energy savings, raising the savings rate from 5.5% to 11.8% and thereby making disk slowdown strategies more feasible for data-intensive high-performance computing. It also raises the energy savings of multi-speed disks from 12.7% to 27.6%.
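An entirely illustrative two-phase sketch in the spirit of the abstract (the times, window size and table format are invented): phase one derives a schedule table from the compiler-extracted access pattern; phase two replays it, batching accesses so that idle gaps become long enough for the disk to be slowed down or spun down.

```python
# Phase 1: compiler-extracted disk access pattern -> schedule table.
access_times = [0.1, 0.2, 0.3, 5.1, 5.2, 9.9]   # seconds (hypothetical)
WINDOW = 1.0                                     # scheduling window (s)

schedule = {}
for t in access_times:
    schedule.setdefault(int(t // WINDOW), []).append(t)

# Phase 2: the data access scheduler issues each window's accesses
# together, leaving the remaining windows fully idle for power management.
for window in sorted(schedule):
    print(f"window {window}: issue {len(schedule[window])} accesses, "
          f"then idle until the next busy window")
```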

18.
The purpose of compiler optimization is to exploit the optimization opportunities in a program and improve its compilation or execution efficiency. Dead code elimination, one of the most widely used compiler optimizations, removes unreachable code from a program to improve its execution efficiency. The execution paths of many applications depend on the values of input parameters supplied at run time, and on some branch paths, in combination with those runtime parameter values, dead code may exist; with existing dead code elimination it is difficult to perform such optimiza…
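A hedged sketch of why runtime parameter values expose extra dead code (this is my illustration, not the paper's technique): once a parameter is known to hold a fixed value at run time, a branch on it becomes statically decidable and the other arm can be deleted.

```python
# Deleting a branch arm that is dead under a known runtime parameter value.
import ast, textwrap

src = textwrap.dedent("""
    if mode == "fast":
        result = fast_path()
    else:
        result = slow_path()
""")

KNOWN = {"mode": "fast"}                 # assumed runtime-constant parameter

tree = ast.parse(src)
branch = tree.body[0]
test = branch.test                       # assumes a simple `name == const` test
taken = KNOWN[test.left.id] == test.comparators[0].value
tree.body = branch.body if taken else branch.orelse   # keep the live arm only
print(ast.unparse(tree))                 # -> result = fast_path()
```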

19.
Earlier work has shown the effectiveness of hand-applied program transformations optimizing high-level interprocess communication mechanisms. This paper describes the static analysis techniques necessary to ensure correct compiler application of the optimizing transformations. These techniques include both dataflow analysis and interprocess analysis. This paper focuses on the analysis of communication mechanisms within program modules; however, the analysis techniques can be generalized to handle inter-module optimization analysis as well. The major contributions of this paper include the application of dataflow analysis and the extension of interprocedural analysis—interprocess analysis—to real concurrent programming languages and, more specifically, to the optimization of interprocess communication and synchronization mechanisms that use both static and dynamic channels. In addition, the use of attribute grammars to perform interprocess analysis is significant. This paper also describes an implementation of both intra-process dataflow and interprocess analysis techniques using attribute grammars. This work was supported by NSF under Grant Number CCR88-10617.

20.
As applications deepen, computing systems face ever higher performance requirements. At the same time, software keeps growing in size, and the conflict between increasingly large software and limited hardware resources is becoming apparent; this conflict is especially acute in embedded systems, mobile computing, and real-time control systems. How to reduce code size and improve code efficiency has therefore become a concern of academia and industry in recent years, and many organizations and institutions are conducting broad and in-depth research on the topic. This paper introduces the background of code-size reduction and its two main approaches, code compression and code compaction. It focuses on code compaction, covering the main compaction methods, the characteristics of each method, and the key techniques involved; it then analyzes the remaining problems and challenges of code compaction and offers some predictions about its future development.
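As a minimal sketch of one classic code-compaction technique, procedural abstraction (illustrative only; real compactors handle register contexts, return sequences, and overlapping candidates): repeated instruction sequences are replaced by calls to a single shared copy.

```python
# Procedural abstraction: factor out one repeated instruction sequence.
def compact(code, seq_len):
    for i in range(len(code) - seq_len + 1):
        seq = tuple(code[i:i + seq_len])
        sites = [j for j in range(len(code) - seq_len + 1)
                 if tuple(code[j:j + seq_len]) == seq]
        # require at least two non-overlapping occurrences
        if len(sites) > 1 and all(b - a >= seq_len
                                  for a, b in zip(sites, sites[1:])):
            covered = {k for s in sites for k in range(s, s + seq_len)}
            out = []
            for idx, instr in enumerate(code):
                if idx in covered:
                    if idx in sites:
                        out.append("CALL shared_seq")   # one call per site
                    continue
                out.append(instr)
            return out, list(seq)
    return code, None

code = ["LOAD a", "ADD b", "STORE c", "LOAD a", "ADD b", "STORE c", "RET"]
compacted, extracted = compact(code, seq_len=3)
print(compacted)    # ['CALL shared_seq', 'CALL shared_seq', 'RET']
print(extracted)    # ['LOAD a', 'ADD b', 'STORE c'] (plus a return, in reality)
```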
