Similar Literature
20 similar records found (search time: 31 ms)
1.
Virtual execution environments, such as the Java virtual machine, promote platform‐independent software development. However, when it comes to analyzing algorithm complexity and performance bottlenecks, available tools focus on platform‐specific metrics, such as the CPU time consumption on a particular system. Other drawbacks of many prevailing profilers are high overhead, significant measurement perturbation, and reduced portability, as such tools are often implemented in platform‐dependent native code. This article presents a novel profiling approach, which is entirely based on program transformation techniques, in order to build a profiling data structure that provides calling‐context‐sensitive program execution statistics. We explore the use of platform‐independent profiling metrics in order to make the instrumentation entirely portable and to generate reproducible profiles. We implemented these ideas within a Java‐based profiling tool called JP. A significant novelty is that this tool achieves complete bytecode coverage by statically instrumenting the core runtime libraries and dynamically instrumenting the rest of the code. JP provides a small and flexible API to write customized profiling agents in pure Java, which are periodically activated to process the collected profiling information. Performance measurements show that, despite the presence of dynamic instrumentation, JP causes significantly less overhead than a prevailing tool for the profiling of Java code. Copyright © 2008 John Wiley & Sons, Ltd.
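To give a feel for the kind of agent API described above, the sketch below shows a pure-Java profiling agent that receives a calling-context-to-bytecode-count mapping and prints the hottest contexts. The interface, method names, and flat profile representation are illustrative assumptions, not JP's actual API.

```java
import java.util.Map;

// Hypothetical agent callback in the spirit of the API described above;
// names and the flat profile representation are illustrative assumptions.
interface ProfilingAgent {
    /** Periodically invoked with calling context -> executed bytecode count. */
    void processProfile(Map<String, Long> profile);
}

/** A minimal agent that prints the ten contexts with the highest counts. */
class HottestContextsAgent implements ProfilingAgent {
    @Override
    public void processProfile(Map<String, Long> profile) {
        profile.entrySet().stream()
               .sorted((a, b) -> Long.compare(b.getValue(), a.getValue()))
               .limit(10)
               .forEach(e -> System.out.println(e.getValue() + " bytecodes in " + e.getKey()));
    }
}
```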

2.
Walter Binder 《Software》2006,36(6):615-650
This article presents a novel framework for the sampling‐based profiling of Java programs, which relies on program transformation techniques. We exploit bytecode instruction counting to regularly activate a user‐defined profiling agent, which processes the current call stack. This approach has several advantages, such as making the instrumentation entirely portable, generating reproducible profiles, and enabling a fine‐grained adjustment of the sampling rate. Our framework offers a flexible API to write portable profiling agents in pure Java. While the overhead due to our profiling scheme is comparable to the overhead caused by prevailing, timing‐based profilers, the resulting profiles are much more accurate. Copyright © 2006 John Wiley & Sons, Ltd.  相似文献   
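The shape of such instrumentation can be pictured as follows: every thread advances a bytecode counter in each executed block, and once the counter crosses a user-chosen interval the agent is handed the current call stack. This is only a conceptual sketch in plain Java (the names, the thread-local counter, and the interval are assumptions); the framework itself injects equivalent logic at the bytecode level.

```java
// Conceptual sketch of sampling driven by bytecode instruction counting.
final class Sampler {
    static final long SAMPLING_INTERVAL = 100_000;            // bytecodes between samples
    static final ThreadLocal<long[]> COUNTER =
            ThreadLocal.withInitial(() -> new long[1]);

    /** Called (conceptually) at the end of each instrumented basic block. */
    static void account(int bytecodesInBlock) {
        long[] c = COUNTER.get();
        c[0] += bytecodesInBlock;
        if (c[0] >= SAMPLING_INTERVAL) {
            c[0] = 0;
            sample(Thread.currentThread().getStackTrace());    // user-defined agent hook
        }
    }

    static void sample(StackTraceElement[] callStack) {
        // A real agent would fold the stack into a calling-context profile here.
        System.out.println("sample: " + callStack.length + " frames");
    }
}
```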

3.
Aspect-oriented programming (AOP) has been successfully applied to application code thanks to techniques such as Java bytecode instrumentation. Unfortunately, with existing AOP frameworks for Java such as AspectJ, aspects cannot be woven into the standard Java class library. This restriction is particularly unfortunate for aspects that would benefit from comprehensive aspect weaving with complete method coverage, such as profiling or debugging aspects. In this article we present MAJOR, a new tool for comprehensive aspect weaving, which ensures that aspects are woven into all classes loaded in a Java Virtual Machine, including those in the standard Java class library. MAJOR includes the pluggable module CARAJillo, which supports efficient access to a complete and customizable calling context representation. We validate our approach with three case studies. Firstly, we weave existing profiling aspects with MAJOR, which otherwise would generate incomplete profiles. Secondly, we introduce an aspect for memory leak detection that also benefits from comprehensive weaving. Thirdly, we present an aspect subsuming the functionality of ReCrash, an existing tool based on low-level bytecode instrumentation techniques that generates unit tests to reproduce program failures. Our aspect-based tools are concisely implemented in a few lines of code, and leverage MAJOR and CARAJillo for comprehensive aspect weaving and for efficient access to calling context information.

4.
Hardware bytecode translation is a technique to improve the performance of the Java virtual machine (JVM), especially on portable devices for which the overhead of dynamic compilation is significant. However, since the translation is done on a single-bytecode basis, a naive implementation of the JVM generates frequent memory accesses for local variables, which can be not only a performance bottleneck but also an obstacle to instruction folding. A solution to this problem is to add a small register file, dedicated to storing local variables, to the data path of the microprocessor. However, the effectiveness of such a local variable register file depends on its size and on the local variable access behavior of the applications. In this paper, we analyze the local variable access behavior of various Java applications. In particular, we investigate the fraction of local variable accesses that are covered by a register file of varying size, which determines the chip area overhead and the operation speed. We also evaluate the effectiveness of the sliding register window for parameter passing in the context of the JVM, and of on-the-fly optimization of the mapping from local variables to the register file. With two types of exceptions, a 16-entry register file achieves coverages of up to 98%. The first type of exception is represented by the SAXON XSLT processor, for which the effect of cold misses is significant. Adding the sliding-window feature to the register file for parameter passing turns 6.2-13.3% of all accesses from register-file misses into hits for SAXON running XSLTMark. The second type of exception is represented by the FFT, which accesses more than 16 local variables in most method invocations. In this case, on-the-fly profiling is effective: the hit ratio of a 16-entry register file for the FFT is increased from 44% to 83% by an array of 8-bit counters.
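The coverage metric quoted above can be reproduced directly from an access trace: with a register file that holds local-variable slots 0..N-1, coverage is simply the fraction of accesses whose slot index is below N. The sketch below makes that computation explicit; the trace format and the direct slot-to-register mapping are simplifying assumptions.

```java
// Back-of-the-envelope sketch of the register-file coverage metric discussed above.
final class RegisterFileCoverage {
    /** @param slotTrace sequence of accessed local-variable slot indices */
    static double coverage(int[] slotTrace, int registerFileEntries) {
        long hits = 0;
        for (int slot : slotTrace) {
            if (slot < registerFileEntries) hits++;   // slot is held in the register file
        }
        return slotTrace.length == 0 ? 1.0 : (double) hits / slotTrace.length;
    }

    public static void main(String[] args) {
        int[] trace = {0, 1, 2, 1, 17, 3, 0, 20, 1};  // e.g. taken from a JVM access trace
        System.out.printf("16-entry coverage: %.0f%%%n", 100 * coverage(trace, 16));
    }
}
```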

5.
6.
In this paper, we propose a solution for a worst‐case execution time (WCET) analyzable Java system: a combination of a time‐predictable Java processor and a tool that performs WCET analysis at Java bytecode level. We present a Java processor, called JOP, designed for time‐predictable execution of real‐time tasks. The execution time of bytecodes, the instructions of the Java virtual machine, is known to cycle accuracy for JOP. Therefore, JOP simplifies the low‐level WCET analysis. A method cache, which fills whole Java methods into the cache, simplifies cache analysis. The WCET analysis tool is based on integer linear programming. The tool performs the low‐level analysis at the bytecode level and integrates the method cache analysis. An integrated data‐flow analysis performs receiver‐type analysis for dynamic method dispatches and loop‐bound analysis. Furthermore, a model checking approach to WCET analysis is presented where the method cache can be exactly simulated. The combination of the time‐predictable Java processor and the WCET analysis tool is evaluated with standard WCET benchmarks and three real‐time applications. The WCET‐friendly architecture of JOP and the integrated method cache analysis yield tight WCET bounds. Comparing the exact, but expensive, model checking‐based analysis of the method cache with the static approach demonstrates that the static approximation of the method cache is sufficiently tight for practical purposes. Copyright © 2010 John Wiley & Sons, Ltd.
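For readers unfamiliar with ILP-based WCET analysis, the formulation typically looks like the following generic IPET-style sketch (standard in the field, not necessarily the exact constraint system of the tool above):

```latex
% x_i: execution count of basic block i, c_i: its cycle cost on the target,
% f_e: execution count of control-flow edge e.
\begin{align*}
\mathrm{WCET} &= \max \sum_i c_i\, x_i \\
\text{s.t.}\quad x_i &= \sum_{e \in \mathrm{in}(i)} f_e = \sum_{e \in \mathrm{out}(i)} f_e
  && \text{(flow conservation)}\\
f_{\mathrm{back}(\ell)} &\le n_\ell \cdot f_{\mathrm{entry}(\ell)}
  && \text{(loop bound } n_\ell \text{ for each loop } \ell\text{)}\\
x_i,\ f_e &\ge 0
\end{align*}
```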

7.
Calling context profiling fulfills programmers’ information needs by providing a complete picture of a program’s inter- and intra-procedural control flow, which is important for workload characterization, debugging, profiling, program comprehension, and reverse engineering. Many existing calling context profilers for Java, however, resort to sampling or other incomplete instrumentation techniques; thus, they collect incomplete profiles only. In this article we present JP2, a new calling context profiler for the Java Virtual Machine, which collects profiles that are not only complete but also call-site aware; that is, JP2 is able to distinguish between multiple call sites within a single method. JP2 supports selective profiling of the dynamic extent of chosen methods and supports profiling of native method invocations. Moreover, produced profiles contain execution statistics at the level of individual basic blocks of code, thereby preserving the intra-procedural control flow of the profiled applications. We rigorously evaluate the overhead incurred by JP2. This overhead is acceptable in tasks such as workload characterization. JP2 is freely available under an Open Source license.
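A call-site-aware calling context tree can be pictured as below: children are keyed by the pair (call-site bytecode index, callee), and each node carries per-basic-block counters. This is an illustrative sketch, not JP2's actual data structures.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative node of a call-site-aware calling context tree (CCT);
// class and field names are assumptions, not JP2's internal representation.
final class CCTNode {
    final String method;              // callee identifier, e.g. "A.foo(I)V"
    final int callSiteBci;            // bytecode index of the call site in the caller
    long invocations;
    final long[] basicBlockCounts;    // intra-procedural execution statistics
    final Map<String, CCTNode> children = new HashMap<>();

    CCTNode(String method, int callSiteBci, int basicBlocks) {
        this.method = method;
        this.callSiteBci = callSiteBci;
        this.basicBlockCounts = new long[basicBlocks];
    }

    /** Keyed by call site *and* callee, so calls to the same method from two
     *  different sites in one caller end up in distinct children. */
    CCTNode child(String callee, int siteBci, int calleeBlocks) {
        return children.computeIfAbsent(siteBci + ":" + callee,
                k -> new CCTNode(callee, siteBci, calleeBlocks));
    }
}
```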

8.
9.
A portable CPU-management framework for Java
The Java resource accounting framework, second edition (J-RAF2), is a portable CPU-management framework for Java environments. It is based on fully automated program-transformation techniques applied at the bytecode level and can be used with every standard Java virtual machine. J-RAF2 modifies applications, libraries, and the Java development kit itself to expose details regarding thread execution. This article focuses on the extensible runtime APIs, which are designed to let developers tailor management policies to their needs.
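The runtime API can be imagined along the following lines: each thread owns an account that the rewritten bytecode updates, and a pluggable policy is notified at a configurable granularity. All names here are illustrative assumptions rather than J-RAF2's published interfaces.

```java
// Conceptual sketch of bytecode-count-based CPU accounting in this spirit.
final class ThreadCpuAccount {
    private static final ThreadLocal<ThreadCpuAccount> CURRENT =
            ThreadLocal.withInitial(ThreadCpuAccount::new);

    private long consumedBytecodes;
    private final long granularity = 1_000_000;   // report to the policy every N bytecodes

    static ThreadCpuAccount current() { return CURRENT.get(); }

    /** The rewritten bytecode would invoke this in every instrumented block. */
    void consume(int bytecodes) {
        consumedBytecodes += bytecodes;
        if (consumedBytecodes >= granularity) {
            policyCallback(consumedBytecodes);     // e.g. throttle, bill, or abort
            consumedBytecodes = 0;
        }
    }

    private void policyCallback(long spent) {
        // A management policy plugged in by the middleware would go here.
    }
}
```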

10.
Performance evaluation of embedded software is essential in an early development phase so as to ensure that the software will run on the embedded device's limited computing resources. The prevailing approaches either require the deployment of the software on the embedded target, which can be tedious and may be impossible in an early development phase, or rely on simulation, which can be very slow. In this article, we introduce a customizable cross‐profiling framework for embedded Java processors, including processors featuring a method cache. The developer profiles the embedded software in the host environment, completely decoupled from the target system, on any standard Java virtual machine, but the generated profiles represent the execution time metric of the target system. Our cross‐profiling framework is based on bytecode instrumentation. We identify several pointcuts in the execution of bytecode that need to be instrumented in order to estimate the CPU cycle consumption on the target system. An evaluation using the JOP embedded Java processor as target confirms that our approach reconciles high profile accuracy with moderate overhead. Our cross‐profiling framework also enables the performance evaluation of new processor architectures before they are implemented. As a case study, we explore the performance impact of various processor design choices and optimizations, such as different cache sizes or pipeline organizations, and come up with an improved processor design that yields speedups of up to 40% on standard Java benchmarks. Copyright © 2009 John Wiley & Sons, Ltd.
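The core of such cross-profiling can be sketched as follows: the instrumented code running on the host accumulates the target processor's estimated cycle costs per basic block, using a cost table derived from the target's timing model. The opcode costs below are placeholders, not JOP's real cycle counts.

```java
import java.util.Map;

// Sketch of the cross-profiling idea: while the program runs on the host JVM,
// instrumentation adds up the *target* processor's estimated cycle costs.
final class CrossProfiler {
    // opcode -> estimated cycles on the embedded target (illustrative numbers only)
    static final Map<Integer, Integer> TARGET_CYCLES = Map.of(
            0x60 /* iadd */, 1,
            0x6c /* idiv */, 35,
            0xb6 /* invokevirtual */, 100);

    static final ThreadLocal<long[]> ESTIMATED = ThreadLocal.withInitial(() -> new long[1]);

    /** Instrumentation would call this once per basic block with the block's
     *  pre-computed aggregate cycle cost on the target architecture. */
    static void accountBlock(long blockCyclesOnTarget) {
        ESTIMATED.get()[0] += blockCyclesOnTarget;
    }

    static long estimatedTargetCycles() { return ESTIMATED.get()[0]; }
}
```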

11.
Accounting for the CPU consumption of applications is crucial for software development to detect and remove performance bottlenecks (profiling) and to evaluate the performance of algorithms (benchmarking). Moreover, extensible middleware may exploit resource consumption information in order to detect a resource overuse of client components (detection of denial-of-service attacks) or to charge clients for the resource consumption of their deployed components. The Java Virtual Machine (JVM) is a predominant target platform for application and middleware developers, but it currently lacks standard mechanisms for resource management. In this paper we present a tool, the Java Resource Accounting Framework, Second Edition (J-RAF2), which enables precise CPU management on standard Java runtime environments. J-RAF2 employs a platform-independent CPU consumption metric, the number of executed JVM bytecode instructions. We explain the advantages of this approach to CPU management and present five case studies that show the benefits in different settings.

12.
In this paper, we provide a theoretical foundation for and improvements to the existing bytecode verification technology, a critical component of the Java security model, for mobile code used with the Java “micro edition” (J2ME), which is intended for embedded computing devices. In Java, remotely loaded “bytecode” class files are required to be bytecode verified before execution, that is, to undergo a static type analysis that protects the platform's Java run-time system from so-called type confusion attacks such as pointer manipulation. The data flow analysis that performs the verification, however, is beyond the capacity of most embedded devices because of the memory that the typical algorithm requires. We propose to take a proof-carrying code approach to data flow analysis in defining an alternative technique called “lightweight analysis” that uses the notion of a “certificate” to reanalyze a previously analyzed data flow problem, even on poorly resourced platforms. We formally prove that the technique provides the same guarantees as standard bytecode safety verification analysis, in particular that it is “tamper proof” in the sense that the guarantees provided by the analysis cannot be broken by crafting a “false” certificate or by altering the analyzed code. We show how the Java bytecode verifier fits into this framework for an important subset of the Java Virtual Machine; we also show how the resulting “lightweight bytecode verification” technique generalizes and simulates the J2ME verifier (to be expected as Sun's J2ME “K-Virtual machine” verifier was directly based on an early version of this work), as well as Leroy's “on-card bytecode verifier,” which is specifically targeted for Java Cards.
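The contrast with standard verification can be summarized as follows (a generic sketch of the idea, not the paper's exact formal system):

```latex
% T_i is the abstract frame (stack/local types) at instruction i, f_i its
% transfer function, and \sqsubseteq the subtyping-induced order on frames.
\begin{align*}
\textbf{Standard verification:}\quad
  & \text{compute the least solution of } T_j \sqsupseteq f_i(T_i)
    \text{ for all edges } i \to j .\\[2pt]
\textbf{Lightweight verification:}\quad
  & \text{the certificate ships candidate frames } \widehat{T}_j \text{ for branch targets;}\\
  & \text{a single linear pass merely checks } f_i(\widehat{T}_i) \sqsubseteq \widehat{T}_j
    \text{ for all edges } i \to j .
\end{align*}
```

Because the single pass only checks inequalities, a forged or tampered certificate can make verification fail, but it can never make an ill-typed program pass.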

13.
Bytecode instrumentation is a widely used technique to implement aspect weaving and dynamic analyses in virtual machines such as the Java virtual machine. Aspect weavers and other instrumentations are usually developed independently, and combining them often requires significant engineering effort, if it is possible at all. In this article, we present polymorphic bytecode instrumentation (PBI), a simple but effective technique that allows dynamic dispatch amongst several, possibly independent instrumentations. PBI enables complete bytecode coverage, that is, any method with a bytecode representation can be instrumented. We illustrate further benefits of PBI with three case studies. First, we describe how PBI can be used to implement a comprehensive profiler of inter‐procedural and intra‐procedural control flow. Second, we provide an implementation of execution levels for AspectJ, which avoids infinite regression and unwanted interference between aspects. Third, we present a framework for adaptive dynamic analysis, where the analysis to be performed can be changed at runtime by the user. We assess the overhead introduced by PBI and provide thorough performance evaluations of PBI in all three case studies. We show that pure Java profilers like JP2 can, thanks to PBI, produce accurate execution profiles by covering all code, including the core Java libraries. We then demonstrate that PBI‐based execution levels are much faster than control flow pointcuts to avoid interference between aspects and that their efficient integration in a practical aspect language is possible. Finally, we report that PBI enables adaptive dynamic analysis tools that are more reactive to user inputs than existing tools that rely on dynamic aspect‐oriented programming with runtime weaving. These experiments position PBI as a widely applicable and practical approach for combining bytecode instrumentations. © 2015 The Authors. Software: Practice and Experience published by John Wiley & Sons Ltd.
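At its core, the dispatch amounts to keeping several versions of each method body and selecting one according to a per-thread instrumentation identifier, roughly as in the source-level caricature below. PBI itself does this at the bytecode level; all names and the source-level formulation are illustrative assumptions.

```java
// Illustrative sketch of dynamic dispatch among several instrumentations.
final class PbiExample {
    static final int UNINSTRUMENTED = 0, PROFILER = 1, LEAK_DETECTOR = 2;
    static final ThreadLocal<Integer> ACTIVE = ThreadLocal.withInitial(() -> PROFILER);

    int compute(int x) {
        switch (ACTIVE.get()) {                 // choose the body version per thread
            case PROFILER:      return computeProfiled(x);
            case LEAK_DETECTOR: return computeLeakChecked(x);
            default:            return computePlain(x);
        }
    }

    private int computePlain(int x)       { return x * x; }
    private int computeProfiled(int x)    { /* update profile here */ return x * x; }
    private int computeLeakChecked(int x) { /* track references here */ return x * x; }
}
```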

14.
Program logics for bytecode languages such as Java bytecode or the .NET CIL can be used to apply Proof-Carrying Code concepts to bytecode programs and to verify correctness properties of bytecode programs. This paper presents a Hoare-style logic for a sequential bytecode kernel language similar to Java bytecode and CIL. The logic handles object-oriented features such as inheritance, dynamic method binding, and object structures with destructive updates, as well as unstructured control flow with jumps. It is sound and complete.
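To give a flavor of what assertions over bytecode look like, here is a purely illustrative triple for a four-instruction fragment; the notation and conventions are ours, not the paper's proof rules. Local 0 holds x, local 1 receives the result, and st denotes the operand stack.

```latex
\[
\{\, \mathit{loc}_0 = x \;\wedge\; \mathit{st} = \varepsilon \,\}\;
\texttt{iload\_0;\ iconst\_1;\ iadd;\ istore\_1}\;
\{\, \mathit{loc}_1 = x + 1 \;\wedge\; \mathit{st} = \varepsilon \,\}
\]
```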

15.
This paper describes intra‐method control‐flow and data‐flow testing criteria for the Java bytecode language. Six testing criteria are considered for the generation of testing requirements: four control‐flow and two data‐flow based. The main reason to work at a lower level is that, even when there is no source code, structural testing requirements can still be derived and used to assess the quality of a given test set. It can be used, for instance, to perform structural testing on third‐party Java components. In addition, the bytecode can be seen as an intermediate language, so the analysis performed at this level can be mapped back to the original high‐level language that generated the bytecode. To support the application of the testing criteria, we have implemented a tool named JaBUTi (Java Bytecode Understanding and Testing). JaBUTi is used to illustrate the application of the ideas developed in this paper. Copyright © 2006 John Wiley & Sons, Ltd.
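As a small illustration of the data-flow requirements such criteria generate, consider the method below (shown at the source level for readability; JaBUTi derives the corresponding pairs from the bytecode). The def-use pairs in the comments are an illustrative subset of what an all-uses-style criterion would ask a test set to cover.

```java
final class ClampExample {
    // Tags @1..@4 refer to the comments below, not to bytecode offsets.
    static int clamp(int v, int max) {
        int r = v;          // @1: def(r)
        if (r > max) {      // @2: use(r), use(max)   -> covers pair (def@1, use@2) for r
            r = max;        // @3: def(r), use(max)
        }
        return r;           // @4: use(r)             -> pairs (def@1, use@4) and (def@3, use@4)
    }
}
```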

16.
This article presents a type certifying compiler for a subset of Java and proves the type correctness of the bytecode it generates in the proof assistant Isabelle. The proof is performed by defining a type compiler that emits a type certificate and by showing a correspondence between bytecode and the certificate which entails well-typing. The basis for this work is an extensive formalization of the Java bytecode type system, which is first presented in an abstract, lattice-theoretic setting and then instantiated to Java types.

17.
Soot is a standalone tool for optimizing and inspecting Java bytecode, as well as a framework for developing optimizations and transformations over bytecode. To help users get started with Soot quickly, this paper summarizes Soot's main data structures, its processing pipeline, and the organization and management of its optimization framework; it dissects Soot's workflow and implementation mechanisms through examples, and briefly describes how Soot can be extended as well as its existing applications.
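For readers who want a starting point, a typical "hello Soot" transformer looks roughly like this: a minimal sketch assuming a recent Soot release in which BodyTransformer.internalTransform takes a Map<String, String>, and following the conventional pattern of registering a Transform in the jtp pack and delegating to Soot's command-line driver.

```java
import java.util.Map;
import soot.Body;
import soot.BodyTransformer;
import soot.PackManager;
import soot.Transform;
import soot.Unit;

// Prints the Jimple statements of every method body Soot processes.
public class PrintJimple extends BodyTransformer {
    @Override
    protected void internalTransform(Body body, String phase, Map<String, String> options) {
        System.out.println("== " + body.getMethod().getSignature());
        for (Unit u : body.getUnits()) {
            System.out.println("   " + u);
        }
    }

    public static void main(String[] args) {
        PackManager.v().getPack("jtp")
                   .add(new Transform("jtp.printJimple", new PrintJimple()));
        soot.Main.main(args);   // e.g. args = { "-cp", ".", "-pp", "MyClass" }
    }
}
```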

18.
The accurate measurement of the execution time of Java bytecode is an important factor in estimating the total execution time of a Java application running on a Java Virtual Machine. In this paper we document the difficulties of, and solutions for, the accurate timing of Java bytecode. We also identify trends across the execution times recorded for all imperative Java bytecodes. These trends suggest that knowing the execution times of a small subset of the Java bytecode instructions would be sufficient to model the execution times of the remainder. We first review a statistical approach for achieving high-precision timing results for Java bytecode using low-precision timers and then present a more suitable technique using homogeneous bytecode sequences for recording such information. We finally compare instruction execution times acquired using this platform-independent technique against execution times recorded using the read time stamp counter assembly instruction. In particular, our results show the existence of a strong linear correlation between both techniques.
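The basic measurement idea can be sketched as below: time a loop whose body is dominated by one kind of operation, subtract the cost of an empty loop, and divide by the number of repetitions. This naive version is easily distorted by JIT compilation (running with -Xint gives a closer approximation of interpreted bytecode cost) and ignores timer resolution, which is exactly the kind of issue a statistical treatment addresses; the iteration counts are arbitrary.

```java
public class AdditionTiming {
    static int sink;   // keeps the additions observable so they are not trivially removed

    public static void main(String[] args) {
        final int reps = 10_000_000;
        long empty = timeEmpty(reps);
        long adds  = timeAdds(reps);
        // 16 additions per iteration in the measured loop body
        double perAddNanos = (double) (adds - empty) / ((double) reps * 16);
        System.out.printf("approx. cost per addition: %.3f ns%n", perAddNanos);
    }

    static long timeEmpty(int reps) {
        long start = System.nanoTime();
        for (int i = 0; i < reps; i++) { /* loop overhead only */ }
        return System.nanoTime() - start;
    }

    static long timeAdds(int reps) {
        int x = 0;
        long start = System.nanoTime();
        for (int i = 0; i < reps; i++) {
            x += 1; x += 2; x += 3; x += 4; x += 5; x += 6; x += 7; x += 8;
            x += 1; x += 2; x += 3; x += 4; x += 5; x += 6; x += 7; x += 8;
        }
        sink = x;
        return System.nanoTime() - start;
    }
}
```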

19.
The Java Virtual Machine executes bytecode programs that may have been sent from other, possibly untrusted, locations on the network. Since the transmitted code may be written by a malicious party or corrupted during network transmission, the Java Virtual Machine contains a bytecode verifier to check the code for type errors before it is run. As illustrated by reported attacks on Java run-time systems, the verifier is essential for system security. However, no formal specification of the bytecode verifier exists in the Java Virtual Machine Specification published by Sun. In this paper, we develop such a specification in the form of a type system for a subset of the bytecode language. The subset includes classes, interfaces, constructors, methods, exceptions, and bytecode subroutines. We also present a type checking algorithm and prototype bytecode verifier implementation, and we conclude by discussing other applications of this work. For example, we show how to extend our formal system to check other program properties, such as the correct use of object locks.
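An example of the kind of rule such a type system contains, written in generic notation rather than the paper's exact formal system: iadd requires two int entries on top of the operand stack and replaces them with one.

```latex
% \Gamma(i) is the typing frame (local types L, stack types S) required at instruction i.
\[
\frac{\Gamma(i) = (L,\ \mathsf{int}\cdot\mathsf{int}\cdot S)}
     {\Gamma(i{+}1) \;\sqsupseteq\; (L,\ \mathsf{int}\cdot S)}
\quad \texttt{iadd at } i
\]
```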

20.
Many profilers for virtual execution environments, such as the Java virtual machine (JVM), are implemented with low‐level bytecode instrumentation techniques, which is tedious, error‐prone, and complicates maintenance and extension of the tools. In order to reduce the development time and cost, we promote building profilers for the JVM using high‐level aspect‐oriented programming (AOP). We show that the use of aspects yields concise profilers that are easy to develop, extend, and maintain, because low‐level instrumentation details are hidden from the tool developer. In order to build efficient profilers, we introduce inter‐advice communication, an extension to common AOP languages that enables efficient data passing between advices that are woven into the same method using local variables. We illustrate our approach with two case studies. First, we show that an existing, instrumentation‐based tool for listener latency profiling can be easily recast as an aspect. Second, we present an aspect for comprehensive calling context profiling. In order to reduce profiling overhead, our aspect parallelizes application execution and profile creation, resulting in a speedup of 110% on a machine with more than two cores, compared with a primitive, non‐parallel approach. Copyright © 2011 John Wiley & Sons, Ltd.
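A generic annotation-style AspectJ profiling aspect, written in plain Java syntax, gives a feel for the level of abstraction argued for above. This is not the paper's actual aspect, and the inter-advice communication extension it proposes cannot be expressed with the stock constructs used here.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

// Accumulates per-method wall-clock time for every woven method execution.
@Aspect
public class MethodLatencyAspect {
    private static final ConcurrentHashMap<String, LongAdder> NANOS = new ConcurrentHashMap<>();

    @Around("execution(* *(..)) && !within(MethodLatencyAspect)")
    public Object profile(ProceedingJoinPoint pjp) throws Throwable {
        long start = System.nanoTime();
        try {
            return pjp.proceed();
        } finally {
            NANOS.computeIfAbsent(pjp.getSignature().toShortString(), k -> new LongAdder())
                 .add(System.nanoTime() - start);
        }
    }
}
```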
