Similar Documents
 20 similar documents found (search time: 15 ms)
1.
XML documents generated dynamically by programs are typically represented as text strings or DOM trees. This is a low-level approach for several reasons: 1) traversing and modifying such structures can be tedious and error-prone, 2) although schema languages, e.g., DTD, allow classes of XML documents to be defined, there are generally no automatic mechanisms for statically checking that a program transforms documents from one class to another as intended. We introduce XACT, a high-level approach for Java using XML templates as a first-class data type with operations for manipulating XML values based on XPath. In addition to an efficient runtime representation, the data type permits static type checking using DTD schemas as types. By specifying schemas for the input and output of a program, our analysis algorithm will statically verify that valid input data is always transformed into valid output data and that the operations are used consistently.  相似文献
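A minimal sketch of the low-level DOM/XPath style this abstract contrasts with XACT's templates, using only the standard javax.xml APIs; the element names and the query are invented for illustration, and nothing here is checked against a DTD statically:

    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.xpath.XPath;
    import javax.xml.xpath.XPathConstants;
    import javax.xml.xpath.XPathFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    public class DomStyleExample {
        public static void main(String[] args) throws Exception {
            // Build a small document node by node -- tedious and easy to get wrong.
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().newDocument();
            Element order = doc.createElement("order");
            Element item = doc.createElement("item");
            item.setTextContent("book");
            order.appendChild(item);
            doc.appendChild(order);

            // Navigate with XPath; validity against a schema is not checked at compile time.
            XPath xpath = XPathFactory.newInstance().newXPath();
            NodeList items = (NodeList) xpath.evaluate("/order/item", doc, XPathConstants.NODESET);
            System.out.println("items found: " + items.getLength());
        }
    }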

2.
We propose FMJ (Featherweight Multi Java), an extension of Featherweight Java with encapsulated multi-methods, thus providing dynamic overloading. Multi-methods (collections of overloaded methods associated with the same message, whose selection takes place dynamically instead of statically as in standard overloading) are a useful and flexible mechanism which enhances re-usability and separation of responsibilities. However, many mainstream languages, such as Java, do not provide it, offering only static overloading. The proposed extension is conservative and type safe: both “message-not-understood” and “message-ambiguous” errors are statically ruled out. Possible ambiguities are checked during type checking only on method invocation expressions, without requiring inspection of all the classes of a program. A static annotation with type information guarantees that in a well-typed program no ambiguity can arise at run-time. This annotation mechanism also permits modeling static overloading in a smooth way. Our core language can be used as the formal basis for an actual implementation of dynamic (and static) overloading in Java-like languages.  相似文献
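A small standard-Java sketch (class and method names invented) of the static overloading behaviour that multi-methods would replace with dynamic selection:

    class Shape {}
    class Circle extends Shape {}

    public class StaticOverloading {
        static String describe(Shape s)  { return "Shape"; }
        static String describe(Circle c) { return "Circle"; }

        public static void main(String[] args) {
            Shape s1 = new Shape();
            Shape s2 = new Circle();          // static type Shape, dynamic type Circle
            // Java picks the overload from the static type, so both calls print "Shape".
            System.out.println(describe(s1)); // Shape
            System.out.println(describe(s2)); // Shape -- a multi-method would select "Circle" here
        }
    }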

3.
Type systems built directly into the compiler or interpreter of a programming language cannot be easily extended to keep track of run-time invariants of new abstractions. Yet, programming with domain-specific abstractions could benefit from additional static checking. This paper presents library techniques for extending the type system of C++ to support domain-specific abstractions. The main contribution is a programmable “subtype” relation. As a demonstration of the techniques, we implement a type system for defining type qualifiers in C++, as well as a type system for the XML processing language, capable of, e.g., statically guaranteeing that a program only produces valid XML documents according to a given XML schema.  相似文献   
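The paper's technique builds a programmable subtype relation with C++ template metaprogramming; purely as a rough analogue, the type-qualifier idea can be gestured at with plain Java wrapper types (all names hypothetical), where "tainted" and "untainted" data get distinct static types:

    public final class Qualified {
        public static final class Tainted<T>   { final T value; Tainted(T v)   { value = v; } }
        public static final class Untainted<T> { final T value; Untainted(T v) { value = v; } }

        // Only untainted strings are accepted here; passing a Tainted<String> is a compile error.
        static void runQuery(Untainted<String> sql) { System.out.println(sql.value); }

        static Untainted<String> sanitize(Tainted<String> in) {
            // hypothetical sanitization step
            return new Untainted<>(in.value.replace("'", "''"));
        }

        public static void main(String[] args) {
            Tainted<String> userInput = new Tainted<>("O'Brien");
            runQuery(sanitize(userInput));
        }
    }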

4.
Statically compiling Java programs into executable programs is an alternative to running them by dynamic compilation or interpretation on a Java virtual machine. Addressing the characteristics of the Java exception mechanism and the requirements of static compilation, and building on an overview of Java's exception-handling logic, this paper proposes an algorithm for implementing the Java exception mechanism in a static compiler. Based on the open-source Open64 compiler, the concrete steps and implementation of the algorithm are described, and its effectiveness is validated using SPECjvm98 as the benchmark suite.  相似文献
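For context, a minimal Java method whose exception-handling control flow a static (ahead-of-time) compiler must lower explicitly, for example into exception tables or landing-pad style dispatch; the lowering itself and the paper's algorithm are not shown:

    public class ExceptionExample {
        static int parseOrDefault(String s, int dflt) {
            try {
                return Integer.parseInt(s);                    // may throw NumberFormatException
            } catch (NumberFormatException e) {
                return dflt;                                   // handler reached on parse failure
            } finally {
                System.out.println("done parsing: " + s);      // runs on every path
            }
        }

        public static void main(String[] args) {
            System.out.println(parseOrDefault("42", 0));   // 42
            System.out.println(parseOrDefault("oops", 0)); // 0
        }
    }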

5.
Array bounds checking in PCC (proof-carrying code) has the problem that, because the range of the symbolic value of an array index expression cannot always be determined, some safe mobile code is refused execution. This paper presents an optimization and generation algorithm for array bounds checks that not only addresses this problem well, but also generates the condition predicates used in loop-invariant annotations. The compiler we designed, a certifying compiler, implements these algorithms and compiles source programs written in a type-safe subset of the C programming language into annotated Intel x86/Linux assembly programs. Since proofs in systems based on language-level security policies are built on top of annotated code, the algorithms implemented in this certifying compiler are very useful for security checking of mobile code.  相似文献

6.
APL in its present interpretive implementations provides an ideal tool for the development of programs. The interpretive overhead, however, often prohibits the running of large production programs. Our results indicate that the overhead involved in checking type and size compatibility of the arguments of APL primitive functions during execution can be substantially avoided without loss of error detection. Using a large sample of syntactically correct APL programs, a statistical analysis was performed on the required conformability checks for each function. We have compiled (for each type of conformability check in each program) statistics on the percentage of checks that could be eliminated by static analysis and verification. The programs analysed contained 1,454 primitive functions. Of the 2,412 rank, length, domain and value checks required by a ‘naïve’ interpreter, 1,966 (80 per cent) can be validated statically. Index checks were required only thirty-one times, but essentially all must be evaluated at run time. This suggests that compilation of APL programs to conventional target codes may be feasible as a high performance implementation whenever the cost of such translation does not dominate.  相似文献   

7.
The programming of efficient parallel software typically requires extensive experimentation with program prototypes. To facilitate such experimentation, any programming system that supports rapid prototyping of parallel programs should provide high-level language primitives with which programs can be explicitly, statically, or dynamically tuned with respect to performance and reliability. Such language primitives should be able to refer conveniently to information about the executing program and the parallel hardware required for tuning. Such information may include monitoring data about the current or previous program, or even hints regarding appropriate tuning decisions. Language primitives and an associated programming system for program tuning are presented. The primitives and system have been implemented, and have been tested with several parallel applications on a network of Unix workstations.  相似文献

8.
This paper shows how to integrate two complementary techniques for manipulating program invariants: dynamic detection and static verification. Dynamic detection proposes likely invariants based on program executions, but the resulting properties are not guaranteed to be true over all possible executions. Static verification checks that properties are always true, but it can be difficult and tedious to select a goal and to annotate programs for input to a static checker. Combining these techniques overcomes the weaknesses of each: dynamically detected invariants can annotate a program or provide goals for static verification, and static verification can confirm properties proposed by a dynamic tool. We have integrated a tool for dynamically detecting likely program invariants, Daikon, with a tool for statically verifying program properties, ESC/Java. Daikon examines run-time values of program variables; it looks for patterns and relationships in those values, and it reports properties that are never falsified during test runs and that satisfy certain other conditions, such as being statistically justified. ESC/Java takes as input a Java program annotated with preconditions, postconditions, and other assertions, and it reports which annotations cannot be statically verified and also warns of potential runtime errors, such as null dereferences and out-of-bounds array indices. Our prototype system runs Daikon, inserts its output into code as ESC/Java annotations, and then runs ESC/Java, which reports unverifiable annotations. The entire process is completely automatic, though users may provide guidance in order to improve results if desired. In preliminary experiments, ESC/Java verified all or most of the invariants proposed by Daikon.  相似文献
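As a flavour of the workflow, a hypothetical method annotated in the ESC/Java pragma style that Daikon's output can be turned into; the method and its contracts are invented for illustration:

    public class MaxAbs {
        //@ requires a != null;
        //@ ensures \result >= 0;
        public static int maxAbs(int[] a) {
            int m = 0;
            for (int i = 0; i < a.length; i++) {
                int v = (a[i] < 0) ? -a[i] : a[i];
                if (v > m) m = v;
            }
            return m;
        }
    }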

9.
One approach to model checking program source code is to view a model checker as a target machine. In this setting, program source code is translated to a model checker’s input language using a process that shares much in common with program compilation. For example, well-defined intermediate program representations are used to stage the translation through a series of analyses and optimizing transformations, and target-specific details are isolated in code generation modules. In this paper, we present the Bandera Intermediate Representation (BIR)—a guarded-assignment transformation system language that has been designed to support the translation of Java programs to a variety of model checkers. BIR includes constructs, such as inheritance, dynamic creation of data, and locking primitives, that are designed to model the semantics of Java primitives. BIR also includes several non-deterministic choice constructs that support abstraction in modeling and specification of properties of dynamic heap structures. We have developed a BIR-based tool infrastructure that has been applied to develop customized analysis frameworks for several different input languages using different model checking tools. We present BIR’s type system and operational semantics in sufficient detail to support similar applications by other researchers. This semantics details several state space reductions and state space search variations. We describe the translation of Java to BIR and how BIR is translated to the input languages of several model checkers.  相似文献

10.
Whenever an array element is accessed, Java virtual machines execute a compare instruction to ensure that the index value is within the valid bounds. This reduces the execution speed of Java programs. Array bounds check elimination identifies situations in which such checks are redundant and can be removed. We present an array bounds check elimination algorithm for the Java HotSpot™ VM based on static analysis in the just-in-time compiler. The algorithm works on an intermediate representation in static single assignment form and maintains conditions for index expressions. It fully removes bounds checks if it can be proven that they never fail. Whenever possible, it moves bounds checks out of loops. The static number of checks remains the same, but a check inside a loop is likely to be executed more often. If such a check fails, the executing program falls back to interpreted mode, avoiding the problem that an exception is thrown at the wrong place. The evaluation shows a speedup close to the theoretical maximum for the scientific SciMark benchmark suite and also significant improvements for some Java Grande benchmarks. The algorithm slightly increases the execution speed for the SPECjvm98 benchmark suite. The evaluation of the DaCapo benchmarks shows that array bounds checks do not have a significant impact on the performance of object-oriented applications.  相似文献
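A tiny illustrative loop (not from the paper) showing the kind of check such an algorithm can prove redundant:

    public class BoundsCheckDemo {
        // Inside the loop the compiler can prove 0 <= i < a.length, so the implicit
        // bounds check on a[i] never fails and can be removed. The access b[i] is not
        // covered by the loop condition and its check must stay: it may throw
        // ArrayIndexOutOfBoundsException if b is shorter than a.
        static long sum(int[] a, int[] b) {
            long s = 0;
            for (int i = 0; i < a.length; i++) {
                s += a[i];
                s += b[i];
            }
            return s;
        }
    }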

11.
A certifying compiler takes a source language program and produces object code, as well as a certificate that can be used to verify that the object code satisfies desirable properties, such as type safety and memory safety. Certifying compilation helps to increase both compiler robustness and program safety. Compiler robustness is improved since some compiler errors can be caught by checking the object code against the certificate immediately after compilation. Program safety is improved because the object code and certificate alone are sufficient to establish safety: even if the object code and certificate are produced on an unknown machine by an unknown compiler and sent over an untrusted network, safe execution is guaranteed as long as the code and certificate pass the verifier. Existing work in certifying compilation has addressed statically generated code. In this paper, we extend this to code generated at run time. Our goal is to combine certifying compilation with run-time code generation to produce programs that are both fast and verifiably safe. To achieve this goal, we present two new languages with explicit run-time code generation constructs: Cyclone, a type-safe dialect of C, and TAL/T, a type-safe assembly language. We have designed and implemented a system that translates a safe C program into Cyclone, which is then compiled to TAL/T, and finally assembled into executable object code. This paper focuses on our overall approach and the front end of our system; details about TAL/T will appear in a subsequent paper.  相似文献

12.
Type checking is considered an important mechanism for detecting programming errors, especially interface errors. This report describes an experiment to assess the defect-detection capabilities of static, intermodule type checking. The experiment uses ANSI C and Kernighan & Ritchie (K&R) C. The relevant difference is that the ANSI C compiler checks module interfaces (i.e., the parameter lists in calls to external functions), whereas K&R C does not. The experiment employs a counterbalanced design in which each of the 40 subjects, most of them CS PhD students, writes two nontrivial programs that interface with a complex library (Motif). Each subject writes one program in ANSI C and one in K&R C. The input to each compiler run is saved and manually analyzed for defects. Results indicate that delivered ANSI C programs contain significantly fewer interface defects than delivered K&R C programs. Furthermore, after subjects have gained some familiarity with the interface they are using, ANSI C programmers remove defects faster and are more productive (measured in both delivery time and functionality implemented).  相似文献

13.
The program transformation principle called partial evaluation has interesting applications in compilation and compiler generation. Self-applicable partial evaluators may be used for transforming interpreters into corresponding compilers and even for the generation of compiler generators. This is useful because interpreters are significantly easier to write than compilers, but run much slower than compiled code. A major difficulty in writing compilers (and compiler generators) is the thinking in terms of distinct binding times: run time and compile time (and compiler generation time). The paper gives an introduction to partial evaluation and describes a fully automatic though experimental partial evaluator, called mix, able to generate stand-alone compilers as well as a compiler generator. Mix partially evaluates programs written in Mixwell, essentially a first-order subset of statically scoped pure Lisp. For compiler generation purposes it is necessary that the partial evaluator be self-applicable. Even though the potential utility of a self-applicable partial evaluator has been recognized since 1971, a 1984 version of mix appears to be the first successful implementation. The overall structure of mix and the basic ideas behind its way of working are sketched. Finally, some results of using a version of mix are reported.  相似文献   
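The paper's mix operates on a first-order Lisp subset; transposed to Java purely for illustration, the classic example of partial evaluation specializes a general program with respect to a known (static) input:

    public class PowerSpecialization {
        // General program: both inputs are dynamic.
        static int power(int x, int n) {
            int r = 1;
            for (int i = 0; i < n; i++) r *= x;
            return r;
        }

        // Residual program obtained by partially evaluating power with the static input n = 3:
        // the loop over the known argument has been unfolded away.
        static int power3(int x) {
            return x * x * x;
        }

        public static void main(String[] args) {
            System.out.println(power(2, 3));  // 8, via the general program
            System.out.println(power3(2));    // 8, via the specialized residual program
        }
    }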

14.
A bytecode verifier for the Java virtual machine language (JVML) statically checks that bytecode does not cause any fatal error. However, the present verifier does not check correctness of the usage of lock primitives. To solve this problem, we extend Stata and Abadi’s type system for JVML by augmenting types with information about how each object is locked and unlocked. The resulting type system guarantees that when a thread terminates, it has released all the locks it has acquired and that a thread releases a lock only if it has acquired the lock previously. We have implemented a prototype Java bytecode verifier based on the type system. We have tested the verifier for several classes in the Java run time library and confirmed that the verifier runs efficiently and gives correct answers.
  相似文献   
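At the source level, the property the extended verifier enforces on JVML corresponds to balanced lock usage; a small Java sketch (using java.util.concurrent rather than bytecode monitorenter/monitorexit) of the discipline in question:

    import java.util.concurrent.locks.ReentrantLock;

    public class LockDiscipline {
        private final ReentrantLock lock = new ReentrantLock();
        private int counter = 0;

        // Balanced usage: every path that acquires the lock also releases it exactly once,
        // and no path releases a lock it never acquired -- the guarantees the type system
        // establishes for well-typed bytecode.
        void increment() {
            lock.lock();
            try {
                counter++;
            } finally {
                lock.unlock();
            }
        }
    }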

15.
In this paper we present the motivation, theory and transformations of our semantics-directed compiler generator. The main novelty of our generator is that it generates compilers and abstract machines. The execution times of the abstract machine programs produced by our generated compiler compare well to those of target programs produced by compilers generated by other semantics-directed generators. The generated specifications of compilers and abstract machines are suitable as a starting point for handwriting compilers and abstract machines. Our generator is fully automated and its core transformations are proved correct. Received May 1997 / Accepted in revised form May 2000  相似文献   

16.
J% is an extension of the Java programming language that efficiently supports the integration of domain-specific languages. In particular, J% allows the embedding of domain-specific language code into Java programs in a syntax-checked and type-safe manner. This paper presents J%'s support for the SQL language. J% checks the syntax and semantics of SQL statements at compile-time. It supports query validation against a database schema or through execution against a live database server. The J% compiler generates code that uses standard JDBC API calls, enhancing runtime efficiency and security against SQL injection attacks.  相似文献
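For comparison, roughly the kind of plain JDBC code such a compiler is described as generating; in ordinary Java the query string is only checked at run time, whereas J% checks it at compile time. The connection URL, table, and column names below are placeholders, not taken from the paper:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class JdbcQuery {
        static void printNames(String city) throws Exception {
            try (Connection con = DriverManager.getConnection("jdbc:h2:mem:demo");
                 PreparedStatement ps = con.prepareStatement(
                         "SELECT name FROM customers WHERE city = ?")) {
                ps.setString(1, city);               // bound parameter, not string concatenation
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("name"));
                    }
                }
            }
        }
    }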

17.
Most large software applications rely on an external relational database for storing and managing persistent data. Typically, such applications interact with the database by first constructing strings that represent SQL statements, and then submitting these for execution by the database engine. The fact that these statements are only checked for correctness at runtime is a source of many potential defects, including type and syntax errors and vulnerability to injection attacks. The AraRat system presented here offers a method for dealing with these difficulties by coercing the host C++ compiler to do the necessary checks of the generated strings. A library of templates and preprocessor directives is used to embed in C++ a little language representing an augmented relational algebra formalism. Type checking of this embedded language, carried out by our template library, assures, at compile-time, the correctness and safety of the generated SQL strings. All SQL statements constructed by AraRat are guaranteed to be syntactically correct, and type-safe with respect to the database schema. Moreover, AraRat statically ensures that the generated statements are immune to all injection attacks. The standard techniques of “expression templates” and “compile-time symbolic derivation” for compile-time representation of symbolic structures are enhanced in our system. We demonstrate the support of a type system and a symbol table lookup of the symbolic structure. A key observation of this work is that type equivalence of instantiated nominally typed generics in C++ (as well as other languages, e.g., Java) is structural rather than nominal. This makes it possible to embed the structural type system, characteristic of persistent data management, in the nominal type system of C++. For some of its advanced features, AraRat relies on two small extensions to the standard C++ language: the typeof pseudo operator and the __COUNTER__ preprocessor macro.  相似文献

18.
The array programming paradigm adopts multidimensional arrays as the fundamental data structures of computation. Array operations process entire arrays instead of just single elements. This makes array programs highly expressive and introduces data parallelism in a natural way. Array programming imposes non-trivial structural constraints on ranks, shapes, and element values of arrays. A prominent example where such constraints are violated is out-of-bounds array accesses. Usually, such constraints are enforced by means of run time checks. Both the run time overhead inflicted by dynamic constraint checking and the uncertainty of proper program evaluation are undesirable. We propose a novel type system for array programs based on dependent types. Our type system makes dynamic constraint checks obsolete and guarantees orderly evaluation of well-typed programs. We employ integer vectors of statically unknown length to index array types. We also show how constraints on these vectors are resolved using a suitable reduction to integer scalars. Our presentation is based on a functional array calculus that captures the essence of the paradigm without the legacy and obfuscation of a fully-fledged array programming language.  相似文献
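A small Java sketch (not the paper's functional calculus) of the kind of dynamic shape check that a dependent type system over array shapes would discharge statically, rejecting mismatched shapes at compile time instead of at run time:

    public class ElementwiseAdd {
        static double[] add(double[] a, double[] b) {
            // Dynamic structural constraint: both operands must have the same shape.
            if (a.length != b.length) {
                throw new IllegalArgumentException("shape mismatch: " + a.length + " vs " + b.length);
            }
            double[] r = new double[a.length];
            for (int i = 0; i < a.length; i++) {
                r[i] = a[i] + b[i];
            }
            return r;
        }
    }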

19.
We study issues in verifying compilers for modern imperative and object-oriented languages. We take the view that it is not the compiler but the code generated by it which must be correct. It is this subtle difference that allows for reusing standard compiler architecture, construction methods and tools also in a verifying compiler. Program checking is the main technique for avoiding the cumbersome task of verifying most parts of a compiler and the tools by which they are generated. Program checking remaps the result of a compiler phase to its origin, the input of this phase, in a provably correct manner. We then only have to compare the actual input to its regenerated form, a basically syntactic process. The correctness proof of the generation of the result is replaced by the correctness proof of the remapping process. The latter turns out to be far easier than proving the generating process correct. The only part of a compiler where program checking does not seem to work is the transformation step which replaces source language constructs and their semantics, given, e.g., by an attributed syntax tree, by an intermediate representation, e.g., in SSA-form, which expresses the same program but in terms of the target machine. This transformation phase must be directly proven using Hoare logic and/or theorem-provers. However, we can show that, given the features of today's programming languages and hardware architectures, this transformation is to a large extent universal: it can be reused for any pair of source and target language. To achieve this goal we investigate annotating the syntax tree as well as the intermediate representation with constraints for exhibiting specific properties of the source language. Such annotations are necessary during code optimization anyway.  相似文献
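Program checking in miniature, shown on sorting rather than on a compiler phase (an illustration only, not the paper's construction): instead of verifying the producer, one verifies each produced result with a much simpler checker:

    import java.util.Arrays;

    public class CheckedSort {
        static int[] sortChecked(int[] input) {
            int[] output = input.clone();
            Arrays.sort(output);                       // stands in for an unverified phase
            if (!isSortedPermutation(input, output)) {
                throw new IllegalStateException("checker rejected the result");
            }
            return output;
        }

        // The checker is far simpler than the producer: it only compares the result to the input.
        static boolean isSortedPermutation(int[] in, int[] out) {
            for (int i = 1; i < out.length; i++) {
                if (out[i - 1] > out[i]) return false; // not sorted
            }
            int[] a = in.clone(), b = out.clone();
            Arrays.sort(a);
            Arrays.sort(b);
            return Arrays.equals(a, b);                // same multiset of elements
        }
    }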

20.
Early Java implementations relied on interpretation, leading to poor performance compared to compiled programs. A Java just-in-time (JIT) compiler can compile Java programs at run time, so it not only improves Java's performance markedly but also preserves Java's portability. In this paper the design and implementation techniques of a Java JIT compiler based on a Chinese open system are discussed in detail. To enhance portability, a translation method that combines static simulation with macro expansion is adopted. Optimization techniques for the JIT compiler are also discussed, and a way to evaluate hotspots in Java programs is presented. Experiments have been conducted to verify that JIT compilation is an efficient way to accelerate Java.  相似文献
