Similar Documents
20 similar documents found.
1.
Object-oriented component engineering is increasingly used for system development, partly because it emphasizes portability and reusability. Each time a component is used, it must be retested in the new environment. Unfortunately, the data abstraction that components usually use results in low testability. First, internal variables cannot be directly set. Second, even though a test input may trigger a fault, the failure does not propagate to the output. This paper presents a technique to increase object-oriented component testability, thereby making it easier to detect faults. Components are often sealed so that source code is not available. The program analysis is performed at the Java component bytecode level. A component's bytecode is analysed to create a control and data flow graph, which is then used to increase component testability by increasing both controllability and observability. We have implemented this technique and applied it to several components. Experimental results reveal that fault detection can be improved by applying our testability-increasing process. Copyright © 2008 John Wiley & Sons, Ltd.
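For readers who want to experiment with this kind of bytecode-level analysis, here is a minimal sketch (not the authors' tool) assuming the ASM library (org.ow2.asm:asm). It scans a sealed component for private fields with no matching accessor, a crude proxy for the controllability and observability gaps the paper targets; the scanned class name is arbitrary.

```java
import org.objectweb.asm.*;
import java.util.*;

public class TestabilityScan {
    public static void main(String[] args) throws Exception {
        Set<String> fields = new HashSet<>();
        Set<String> methods = new HashSet<>();
        // Reads the class from the classpath; pass any component class name.
        new ClassReader("java.util.BitSet").accept(new ClassVisitor(Opcodes.ASM9) {
            @Override
            public FieldVisitor visitField(int acc, String name, String d, String s, Object v) {
                if ((acc & Opcodes.ACC_PRIVATE) != 0) fields.add(name);
                return null;
            }
            @Override
            public MethodVisitor visitMethod(int acc, String name, String d, String s, String[] e) {
                methods.add(name.toLowerCase());
                return null;
            }
        }, 0);
        // A private field with no getter is hard to observe; with no setter,
        // hard to control -- the two properties the paper's process improves.
        for (String f : fields) {
            boolean observable = methods.contains("get" + f.toLowerCase());
            boolean controllable = methods.contains("set" + f.toLowerCase());
            if (!observable || !controllable)
                System.out.println(f + ": observable=" + observable + ", controllable=" + controllable);
        }
    }
}
```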

2.
Fault links represent relationships between the types of code faults, or defects, and the types of components in which faults are detected. For example, our prior work validated that a fault link exists between Controller components and Control/Logic faults (such as unreachable code). Fault link information can guide code reviews, walkthroughs, testing, maintenance, and can advise fault seeding. In this paper, we use fault links to augment code reviews. Two experiments were undertaken to evaluate the usefulness of fault links, one with 26 Computer Science students and another with 24 software engineering professionals. The first experiment showed that fault link information assisted in finding more total defects and more ‘hard to detect’ defects, in the same amount of time, in a Java component of an online course management application. The experiment was repeated with professionals, adding a second Java component from the same application. For the second experiment, more total defects were found by the participants using fault link information for one of the two components and more hard to detect defects were found, in the same amount of time, in both Java components. The group using fault link information for code walkthroughs found, on average, 1.7–2 times more faults and 2–3 times more hard faults than the control group. Copyright © 2010 John Wiley & Sons, Ltd.

3.
Component users need to customize components they obtain from providers, because providers usually develop components for general use. Although the customization is accomplished by modifying the interface of a component, faults from customization appear when the implementation part of a component and the interfaces interact. The implementation part is a black-box, whose source code is not available to a component user, while the interface is a white-box, whose source code is available for customization. Therefore, customization faults should be tested using both the black-box part and the white-box part of a component. This paper proposes a new technique to test customization faults using software fault injection and mutation testing, and the technique is tailored to Enterprise JavaBeans. Test cases are selected by injecting faults not into the entire interface but into specific parts of the component's interface. The specific parts that are chosen control the effectiveness of the test cases. An empirical study to evaluate the technique is reported. Copyright © 2003 John Wiley & Sons, Ltd.
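As a rough illustration of interface-level fault injection (the paper targets Enterprise JavaBeans; the Account interface and the negation fault below are invented for the example), a java.lang.reflect.Proxy can inject a mutation into one chosen interface operation while leaving the black-box implementation untouched:

```java
import java.lang.reflect.*;

public class InterfaceFaultInjection {
    interface Account {                 // stands in for a customized component interface
        void deposit(int amount);
        int balance();
    }

    static class AccountImpl implements Account {   // black-box implementation part
        private int balance;
        public void deposit(int amount) { balance += amount; }
        public int balance() { return balance; }
    }

    // Wraps the component and injects a mutation-style fault into ONE chosen
    // interface operation, leaving the rest of the interface untouched.
    static Account withFault(Account target, String faultyMethod) {
        return (Account) Proxy.newProxyInstance(
            Account.class.getClassLoader(), new Class<?>[]{Account.class},
            (proxy, method, args) -> {
                if (method.getName().equals(faultyMethod) && args != null) {
                    args[0] = -((Integer) args[0]);   // injected fault: negate the argument
                }
                return method.invoke(target, args);
            });
    }

    public static void main(String[] args) {
        Account mutant = withFault(new AccountImpl(), "deposit");
        mutant.deposit(100);
        // A test case "kills" this mutant if it distinguishes 100 from -100 here.
        System.out.println("balance after deposit(100): " + mutant.balance());
    }
}
```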

4.
We present a new approach for creating repositories of real software faults. We have developed a tool, the Automatic Fault IDentification Tool (AFID), that implements this approach. AFID records both a fault-revealing test case and a faulty version of the source code for any crashing faults that the developer discovers, and a fault-correcting source code change for any crashing faults that the developer corrects. The test cases are a significant contribution, because they enable new research that explores the dynamic behaviors of the software faults. AFID uses an operating system level monitoring mechanism to monitor both the compilation and execution of the application. This technique makes it straightforward for AFID to support a wide range of programming languages and compilers.
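AFID itself hooks the operating system, but its record-on-crash workflow can be approximated in plain Java: run the compile and execute steps as subprocesses and, when the program crashes, snapshot the sources into a fault repository. The file names and commands below are illustrative only.

```java
import java.io.IOException;
import java.nio.file.*;

public class CrashArchiver {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Illustrative commands; AFID intercepts these at the OS level instead.
        if (run("javac", "Main.java") != 0) return;          // compilation step
        int exit = run("java", "Main");                      // execution step
        if (exit != 0) {                                     // crash detected
            Path repo = Files.createDirectories(
                Paths.get("fault-repo", "fault-" + System.currentTimeMillis()));
            // Archive the faulty version; a fault-revealing input would be saved too.
            Files.copy(Paths.get("Main.java"), repo.resolve("Main.java"));
            System.out.println("archived faulty version in " + repo);
        }
    }

    static int run(String... cmd) throws IOException, InterruptedException {
        return new ProcessBuilder(cmd).inheritIO().start().waitFor();
    }
}
```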

5.
In this paper, we provide a theoretical foundation for and improvements to the existing bytecode verification technology, a critical component of the Java security model, for mobile code used with the Java “micro edition” (J2ME), which is intended for embedded computing devices. In Java, remotely loaded “bytecode” class files are required to be bytecode verified before execution, that is, to undergo a static type analysis that protects the platform's Java run-time system from so-called type confusion attacks such as pointer manipulation. The data flow analysis that performs the verification, however, is beyond the capacity of most embedded devices because of the memory requirements of the typical algorithm. We propose to take a proof-carrying code approach to data flow analysis in defining an alternative technique called “lightweight analysis” that uses the notion of a “certificate” to reanalyze a previously analyzed data flow problem, even on poorly resourced platforms. We formally prove that the technique provides the same guarantees as standard bytecode safety verification analysis, in particular that it is “tamper proof” in the sense that the guarantees provided by the analysis cannot be broken by crafting a “false” certificate or by altering the analyzed code. We show how the Java bytecode verifier fits into this framework for an important subset of the Java Virtual Machine; we also show how the resulting “lightweight bytecode verification” technique generalizes and simulates the J2ME verifier (to be expected as Sun's J2ME “K-Virtual machine” verifier was directly based on an early version of this work), as well as Leroy's “on-card bytecode verifier,” which is specifically targeted for Java Cards.
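The core trick, checking a producer-supplied certificate in a single linear pass instead of iterating a dataflow fixpoint, can be illustrated with a toy stack-height checker; this is a drastically simplified stand-in for the paper's type-level analysis, not its actual algorithm.

```java
import java.util.Map;

public class LightweightChecker {
    // Toy model: each instruction is reduced to its net stack effect, and the
    // producer's certificate records the expected stack height at every branch
    // target. The consumer then checks the whole method in ONE linear pass,
    // with no fixpoint iteration -- the essence of lightweight verification.
    // A tampered certificate or altered code makes the heights disagree.
    public static boolean check(int[] stackEffect, Map<Integer, Integer> certificate) {
        int height = 0;
        for (int pc = 0; pc < stackEffect.length; pc++) {
            Integer expected = certificate.get(pc);
            if (expected != null && expected != height) return false; // certificate lies
            height += stackEffect[pc];
            if (height < 0) return false;                             // stack underflow
        }
        return true;
    }

    public static void main(String[] args) {
        // push, push, add, store -> heights before each pc: 0, 1, 2, 1.
        // The certificate asserts height 2 before pc 2 (a pretend merge point).
        System.out.println(check(new int[]{+1, +1, -1, -1}, Map.of(2, 2)));
    }
}
```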

6.
To meet the need for software intellectual-property protection in cloud computing environments, this paper proposes a software watermarking scheme based on the theory of non-equivalent semantic obfuscation. The scheme designs a non-equivalent semantic obfuscation method that cuts out and hides code semantics, while using the obfuscation rules to virtually map the watermark. The cut-out semantics are placed in a separate module, which verifies the watermark information and restores normal program execution. Decompilation experiments on the scheme show that it is hard to reverse engineer, which guarantees the robustness of the watermark and effectively protects code deployed in the cloud.

7.
Software repositories hold applications that are often categorized to improve the effectiveness of various maintenance tasks. Properly categorized applications allow stakeholders to identify requirements related to their applications and predict maintenance problems in software projects. Manual categorization is expensive, tedious, and laborious – this is why automatic categorization approaches are gaining widespread importance. Unfortunately, for different legal and organizational reasons, the applications’ source code is often not available, thus making it difficult to automatically categorize these applications. In this paper, we propose a novel approach in which we use Application Programming Interface (API) calls from third-party libraries for automatic categorization of software applications that use these API calls. Our approach is general since it enables different categorization algorithms to be applied to repositories that contain both source code and bytecode of applications, since API calls can be extracted from both the source code and bytecode. We compare our approach to a state-of-the-art approach that uses machine learning algorithms for software categorization, and conduct experiments on two large Java repositories: an open-source repository containing 3,286 projects and a closed-source repository with 745 applications, where the source code was not available. Our contribution is twofold: we propose a new approach that makes it possible to categorize software projects without any source code using a small number of API calls as attributes, and furthermore we carried out a comprehensive empirical evaluation of automatic categorization approaches.
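A sketch of the feature-extraction step, again assuming the ASM library: API calls are read directly from bytecode via visitMethodInsn, so no source code is needed, and the resulting set of owner.name strings would serve as the attributes fed to a categorization algorithm.

```java
import org.objectweb.asm.*;
import java.util.*;

public class ApiCallExtractor {
    // Collects the API calls made by one class; the resulting set is the
    // attribute vector a categorization algorithm would consume.
    public static Set<String> apiCalls(byte[] classFile) {
        Set<String> calls = new TreeSet<>();
        new ClassReader(classFile).accept(new ClassVisitor(Opcodes.ASM9) {
            @Override
            public MethodVisitor visitMethod(int a, String n, String d, String s, String[] e) {
                return new MethodVisitor(Opcodes.ASM9) {
                    @Override
                    public void visitMethodInsn(int op, String owner, String name,
                                                String desc, boolean itf) {
                        calls.add(owner + "." + name);   // e.g. java/net/Socket.connect
                    }
                };
            }
        }, 0);
        return calls;
    }

    public static void main(String[] args) throws Exception {
        // Usage: pass the path of any .class file.
        byte[] bytes = java.nio.file.Files.readAllBytes(java.nio.file.Paths.get(args[0]));
        apiCalls(bytes).forEach(System.out::println);
    }
}
```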

8.
The increasing level of automation in critical infrastructures requires development of effective ways for finding faults in safety critical software components. Synchronization in concurrent components is especially prone to errors and, due to difficulty of exploring all thread interleavings, it is difficult to find synchronization faults. In this paper we present an experimental study demonstrating the effectiveness of model checking techniques in finding synchronization faults in safety critical software when they are combined with a design for verification approach. We based our experiments on an automated air traffic control software component called the Tactical Separation Assisted Flight Environment (TSAFE). We first reengineered TSAFE using the concurrency controller design pattern. The concurrency controller design pattern enables a modular verification strategy by decoupling the behaviors of the concurrency controllers from the behaviors of the threads that use them using interfaces specified as finite state machines. The behavior of a concurrency controller is verified with respect to arbitrary numbers of threads using the infinite state model checking techniques implemented in the Action Language Verifier (ALV). The threads which use the controller classes are checked for interface violations using the finite state model checking techniques implemented in the Java Path Finder (JPF). We present techniques for thread isolation which enables us to analyze each thread in the program separately during interface verification. We conducted two sets of experiments using these verification techniques. First, we created 40 faulty versions of TSAFE using manual fault seeding. During this exercise we also developed a classification of faults that can be found using the presented design for verification approach. Next, we generated another 100 faulty versions of TSAFE using randomly seeded faults that were created automatically based on this fault classification. We used both infinite and finite state verification techniques for finding the seeded faults. The results of our experiments demonstrate the effectiveness of the presented design for verification approach in eliminating synchronization faults.
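The pattern itself can be sketched compactly. The mutual-exclusion controller below is a simplified invented example (not TSAFE code): it pairs a guarded-wait synchronization policy with a per-thread finite-state interface (IDLE -> HELD -> IDLE) whose violations surface as exceptions, which is what JPF-style interface verification checks across interleavings.

```java
public class MutexController {
    private enum IfaceState { IDLE, HELD }
    // Each thread carries its own interface state machine.
    private final ThreadLocal<IfaceState> state =
        ThreadLocal.withInitial(() -> IfaceState.IDLE);
    private boolean busy = false;

    public synchronized void acquire() throws InterruptedException {
        if (state.get() != IfaceState.IDLE)
            throw new IllegalStateException("interface violation: re-acquire");
        while (busy) wait();          // guarded wait: the verified controller behavior
        busy = true;
        state.set(IfaceState.HELD);
    }

    public synchronized void release() {
        if (state.get() != IfaceState.HELD)
            throw new IllegalStateException("interface violation: release w/o acquire");
        busy = false;
        state.set(IfaceState.IDLE);
        notifyAll();
    }
}
```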

9.
To test a Java Virtual Machine (JVM), developers typically hand-craft complex test programs, or generate them with test-generation tools, in order to expose latent JVM defects. However, complex test programs impose a very high cost on developers when localizing and fixing the detected defects. Test-program reduction techniques aim to delete as much code as possible that is irrelevant to defect detection, while preserving the test program's defect-detecting ability. Existing work based on Delta Debugging achieves good reduction on C programs and XML inputs, but in JVM testing, the reduction of Java test programs with complex syntactic and semantic dependencies still suffers from coarse granularity and poor reduction quality, so the reduced programs remain costly to understand. For Java test programs with complex program dependencies, this paper therefore proposes JavaPruner, a fine-grained test-program reduction method based on program constraints. It first designs a fine-grained code metric at the statement-block level, and then introduces dependency constraints between statement blocks into Delta Debugging to reduce the test program. Taking Java bytecode test programs as the experimental subjects, we selected 50 test programs with complex dependencies from existing test-program generators for JVM testing as a benchmark dataset and evaluated the effectiveness of JavaPruner on it. The results show that JavaPruner effectively removes redundant code from Java bytecode test programs. Compared with existing methods, it improves reduction capability by 37.7% on average across all benchmarks, and it can reduce a Java bytecode test program to as little as 1.09% of its original size while preserving program validity and defect-detecting capability, effectively lowering the cost of analyzing and understanding test programs.
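A much-simplified sketch of the general idea (not JavaPruner's implementation) is a Delta-Debugging-style loop over statement blocks in which a dependency oracle filters out candidates that violate block-level constraints; both oracles are left abstract as predicates here.

```java
import java.util.*;
import java.util.function.Predicate;

public class BlockReducer {
    // Repeatedly try dropping one statement block; keep the drop only if
    // (a) the block-level dependency constraints still hold, and
    // (b) the reduced program still triggers the JVM defect.
    public static List<String> reduce(List<String> blocks,
                                      Predicate<List<String>> dependenciesOk,
                                      Predicate<List<String>> stillFails) {
        List<String> current = new ArrayList<>(blocks);
        boolean shrunk = true;
        while (shrunk) {
            shrunk = false;
            for (int i = 0; i < current.size(); i++) {
                List<String> candidate = new ArrayList<>(current);
                candidate.remove(i);
                if (dependenciesOk.test(candidate) && stillFails.test(candidate)) {
                    current = candidate;   // commit the smaller, still-failing program
                    shrunk = true;
                    break;                 // restart the scan on the smaller program
                }
            }
        }
        return current;
    }
}
```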

10.
11.
Most of the existing fault localization approaches use execution coverage of test cases to isolate the suspicious code that likely contains faults. Program slicing can extract the dependencies of program entities with respect to a specific criterion. Therefore this technique is expected to have a beneficial effect on fault localization. In this paper, we propose a novel approach using a hybrid spectrum of full slices and execution slices to improve the effectiveness of fault localization. In particular, our approach first computes full slices of failed test cases and execution slices of passed test cases respectively. Second, it constructs the hybrid spectrum by intersecting full slices and execution slices. Finally, it computes the suspiciousness of each statement in the hybrid slice spectrum and generates a fault location report with descending suspiciousness of each statement. We also implement our proposed approach in our prototype tool HSFal in the Java programming language. To verify the effectiveness of our approach, we performed an empirical study by the prototype on several widely used open source programs. Our approach is compared with eight representative coverage-based and slice-based fault localization approaches. Final experimental results show that our proposed approach is more effective in fault localization than the compared approaches, and can significantly reduce the average cost of examined code, by 2.98–31.79%.
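The suspiciousness formula is not fixed above; as one representative choice (my stand-in, not necessarily the metric HSFal uses), the Ochiai coefficient can be computed per statement from the hybrid slice spectrum as follows:

```java
public class Suspiciousness {
    // Ochiai: ef / sqrt((ef+nf) * (ef+ep)), computed only for statements in the
    // hybrid spectrum, i.e. the intersection of failed-run full slices and
    // passed-run execution slices.
    //   ef/ep: failed/passed tests whose slice contains the statement;
    //   nf:    failed tests whose slice does not.
    public static double ochiai(int ef, int nf, int ep) {
        double denom = Math.sqrt((double) (ef + nf) * (ef + ep));
        return denom == 0 ? 0 : ef / denom;
    }

    public static void main(String[] args) {
        // A statement in all 3 failing slices and 1 passing slice scores ~0.87:
        System.out.println(ochiai(3, 0, 1));
    }
}
```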

12.
Cost analysis statically approximates the cost of programs in terms of their input data size. This paper presents, to the best of our knowledge, the first approach to the automatic cost analysis of object-oriented bytecode programs. In languages such as Java and C#, analyzing bytecode has a much wider application area than analyzing source code since the latter is often not available. Cost analysis in this context has to consider, among others, dynamic dispatch, jumps, the operand stack, and the heap. Our method takes a bytecode program and a cost model specifying the resource of interest, and generates cost relations which approximate the execution cost of the program with respect to such resource. We report on COSTA, an implementation for Java bytecode which can obtain upper bounds on cost for a large class of programs and complexity classes. Our basic techniques can be directly applied to infer cost relations for other object-oriented imperative languages, not necessarily in bytecode form.
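As an invented illustration (not taken from the paper): for a method that walks a list of length n doing constant work per element, the analysis would emit a recursive cost relation and an upper bound of roughly this shape:

```latex
C(n) = c_1 + C(n-1) \quad \text{for } n > 0, \qquad C(0) = c_0
% closed-form upper bound: C(n) <= c_0 + c_1 \cdot n, i.e. O(n) in the input size
```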

13.
This paper describes intra-method control-flow and data-flow testing criteria for the Java bytecode language. Six testing criteria are considered for the generation of testing requirements: four control-flow based and two data-flow based. The main reason to work at a lower level is that, even when there is no source code, structural testing requirements can still be derived and used to assess the quality of a given test set. It can be used, for instance, to perform structural testing on third-party Java components. In addition, the bytecode can be seen as an intermediate language, so the analysis performed at this level can be mapped back to the original high-level language that generated the bytecode. To support the application of the testing criteria, we have implemented a tool named JaBUTi (Java Bytecode Understanding and Testing). JaBUTi is used to illustrate the application of the ideas developed in this paper. Copyright © 2006 John Wiley & Sons, Ltd.

14.
The accurate measurement of the execution time of Java bytecode is an important factor in estimating the total execution time of a Java application running on a Java Virtual Machine. In this paper we document the difficulties and solutions for the accurate timing of Java bytecode. We also identify trends across the execution times recorded for all imperative Java bytecodes. These trends suggest that knowing the execution times of a small subset of the Java bytecode instructions would be sufficient to model the execution times of the remainder. We first review a statistical approach for achieving high precision timing results for Java bytecode using low precision timers and then present a more suitable technique using homogeneous bytecode sequences for recording such information. We finally compare instruction execution times acquired using this platform independent technique against execution times recorded using the read time stamp counter assembly instruction. In particular our results show the existence of a strong linear correlation between both techniques.
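The differencing idea behind homogeneous-sequence timing can be sketched as follows. This is a toy: an arithmetic loop stands in for a repeated bytecode sequence, and timing it at two repetition counts lets the loop and timer overhead cancel in the difference; real measurements need JIT warm-up and many samples.

```java
public class OpTimer {
    static volatile long sink;   // defeats dead-code elimination by the JIT

    // Times 'reps' repetitions of a homogeneous operation sequence.
    static long time(long reps) {
        long acc = 0, start = System.nanoTime();
        for (long i = 0; i < reps; i++) acc += i;   // stand-in for a bytecode sequence
        sink = acc;
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        for (int w = 0; w < 5; w++) time(10_000_000L);        // JIT warm-up
        long t1 = time(10_000_000L), t2 = time(20_000_000L);
        // Slope of time vs. repetitions approximates the per-sequence cost.
        System.out.printf("~%.3f ns per iteration%n", (t2 - t1) / 10_000_000.0);
    }
}
```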

15.
Static Single Assignment (SSA) form is often used as an intermediate representation during code optimization in Java Virtual Machines. Recently, SSA has successfully been used for bytecode verification. However, constructing SSA at the code consumer is costly. SSA-based mobile code transport formats have been shown to eliminate this cost by shifting SSA creation to the code producer. These new formats, however, are not backward compatible with the established Java class-file format. We propose a novel approach to transport SSA information implicitly through structural code properties of standard Java bytecode. While the resulting bytecode sequence can still be directly executed by traditional Virtual Machines, our novel VM can infer SSA form and confirm its safety with virtually no overhead.

16.
We present a runtime verification framework for Java programs. Properties can be specified in Linear-time Temporal Logic (LTL) over AspectJ pointcuts. These properties are checked during program execution by an automaton-based approach where transitions are triggered through aspects. No Java source code is necessary since AspectJ works on the bytecode level, thus even allowing instrumentation of third-party applications. As an example, we discuss safety properties and lock-order reversal.
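To give a flavor of the approach, here is a hand-written monitor for the lock-order-reversal safety property mentioned above; in the paper's framework the acquired/released events would be emitted by AspectJ advice on lock-acquisition join points rather than called manually, and the automaton would be generated from an LTL formula.

```java
import java.util.*;

public class LockOrderMonitor {
    // Record every "acquire b while holding a" edge; if the reverse edge was
    // observed earlier, the safety property is violated.
    private final Deque<String> held = new ArrayDeque<>();
    private final Set<String> order = new HashSet<>();

    public synchronized void acquired(String lock) {
        for (String h : held) {
            if (order.contains(lock + "->" + h))
                throw new AssertionError("lock-order reversal: " + h + " vs " + lock);
            order.add(h + "->" + lock);
        }
        held.push(lock);
    }

    public synchronized void released(String lock) { held.remove(lock); }

    public static void main(String[] args) {
        LockOrderMonitor m = new LockOrderMonitor();
        m.acquired("A"); m.acquired("B"); m.released("B"); m.released("A");
        m.acquired("B"); m.acquired("A");   // throws: reverses the earlier A->B order
    }
}
```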

17.
Spectrum-based fault localization is amongst the most effective techniques for automatic fault localization. However, abstractions of program execution traces, one of the required inputs for this technique, require instrumentation of the software under test at a statement level of granularity in order to compute a list of potential faulty statements. This introduces a considerable overhead in the fault localization process, which can even become prohibitive in, e.g., resource-constrained environments. To counter this problem, we propose a new approach, coined dynamic code coverage (DCC), aimed at reducing this instrumentation overhead. This technique, by means of using coarser instrumentation, starts by analyzing coverage traces for large components of the system under test. It then progressively increases the instrumentation detail for faulty components, until the statement level of detail is reached. To assess the validity of our proposed approach, an empirical evaluation was performed, injecting faults in six real-world software projects. The empirical evaluation demonstrates that the dynamic code coverage approach reduces the execution overhead that exists in spectrum-based fault localization, and even presents a more concise potential fault ranking to the user. We have observed execution time reductions of 27% on average and diagnostic report size reductions of 77% on average.
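The zooming loop at the heart of DCC can be sketched as follows; coverWith and suspicious are hypothetical placeholders for the instrumenting test runner and the fault-localization ranking, not part of any published API.

```java
import java.util.*;

public class DccZoom {
    enum Level { COMPONENT, CLASS, METHOD, STATEMENT }

    // Placeholder: instrument 'units' at the given granularity, run the tests,
    // and return their coverage spectra.
    static List<String> coverWith(Level level, List<String> units) {
        return units;
    }

    // Placeholder: keep only the most suspicious quarter of the units.
    static List<String> suspicious(List<String> spectra) {
        return spectra.subList(0, Math.max(1, spectra.size() / 4));
    }

    public static void main(String[] args) {
        List<String> units = Arrays.asList("ui", "core", "io", "net");
        // Start coarse; only the suspicious survivors are re-instrumented at the
        // next, finer level, so most code never gets statement-level probes.
        for (Level level : Level.values()) {
            units = suspicious(coverWith(level, units));
            System.out.println(level + " -> refine " + units);
            // A real implementation would expand each surviving unit into its
            // children (classes, then methods, then statements) before the next pass.
        }
    }
}
```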

18.
Reasoning about Java bytecode (JBC) is complicated due to its unstructured control-flow, the use of three-address code combined with the use of an operand stack, etc. Therefore, many static analyzers and model checkers for JBC first convert the code into a higher-level representation. In contrast to traditional decompilation, such representation is often not Java source, but rather some intermediate language which is a good input for the subsequent phases of the tool. Interpretive decompilation consists in partially evaluating an interpreter for the compiled language (in this case JBC) written in a high-level language with respect to the code to be decompiled. There have been proofs-of-concept that interpretive decompilation is feasible, but important open issues remain when it comes to decompiling a real language such as JBC. This paper presents, to the best of our knowledge, the first modular scheme to enable interpretive decompilation of a realistic programming language to a high-level representation, namely of JBC to Prolog. We introduce two notions of optimality which together require that decompilation does not generate code more than once for each program point. We demonstrate the impact of our modular approach and optimality issues on a series of realistic benchmarks. Decompilation times and decompiled program sizes are linear with the size of the input bytecode program. This demonstrates empirically the scalability of modular decompilation of JBC by partial evaluation.

19.
In Java bytecode, intra-method subroutines are employed to represent code in “finally” blocks. The use of such polymorphic subroutines within a method makes bytecode analysis very difficult. Fortunately, such subroutines can be eliminated through recompilation or inlining. Inlining is the obvious choice since it does not require changing compilers or access to the source code. It also allows transformation of legacy bytecode. However, the combination of nested, non-contiguous subroutines with overlapping exception handlers poses a difficult challenge. This paper presents an algorithm that successfully solves all these problems without producing superfluous instructions. Furthermore, inlining can be combined with bytecode simplification, using abstract bytecode. We show how this abstraction is extended to the full set of instructions and how it simplifies static and dynamic analysis.

20.
Prioritizing test cases for regression testing
Test case prioritization techniques schedule test cases for execution in an order that attempts to increase their effectiveness at meeting some performance goal. Various goals are possible; one involves rate of fault detection, a measure of how quickly faults are detected within the testing process. An improved rate of fault detection during testing can provide faster feedback on the system under test and let software engineers begin correcting faults earlier than might otherwise be possible. One application of prioritization techniques involves regression testing, the retesting of software following modifications; in this context, prioritization techniques can take advantage of information gathered about the previous execution of test cases to obtain test case orderings. We describe several techniques for using test execution information to prioritize test cases for regression testing, including: 1) techniques that order test cases based on their total coverage of code components; 2) techniques that order test cases based on their coverage of code components not previously covered; and 3) techniques that order test cases based on their estimated ability to reveal faults in the code components that they cover. We report the results of several experiments in which we applied these techniques to various test suites for various programs and measured the rates of fault detection achieved by the prioritized test suites, comparing those rates to the rates achieved by untreated, randomly ordered, and optimally ordered suites.
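Technique 2, the “additional coverage” greedy ordering, is easy to make concrete. The sketch below (a minimal implementation of that greedy idea, not the authors' tool) repeatedly schedules the test covering the most components not yet covered and resets the covered set when no remaining test adds new coverage.

```java
import java.util.*;

public class AdditionalCoverage {
    // coverage.get(t) is the set of code components covered by test t;
    // returns test indices in "additional coverage" priority order.
    public static List<Integer> prioritize(List<Set<String>> coverage) {
        List<Integer> order = new ArrayList<>();
        Set<Integer> remaining = new HashSet<>();
        for (int i = 0; i < coverage.size(); i++) remaining.add(i);
        Set<String> covered = new HashSet<>();
        while (!remaining.isEmpty()) {
            int best = -1, gain = -1;
            for (int t : remaining) {
                Set<String> extra = new HashSet<>(coverage.get(t));
                extra.removeAll(covered);
                if (extra.size() > gain) { gain = extra.size(); best = t; }
            }
            if (gain > 0) {
                order.add(best); remaining.remove(best);
                covered.addAll(coverage.get(best));
            } else if (!covered.isEmpty()) {
                covered.clear();           // no test adds coverage: reset and re-rank
            } else {
                order.add(best); remaining.remove(best);  // test covers nothing; append
            }
        }
        return order;
    }

    public static void main(String[] args) {
        System.out.println(prioritize(List.of(
            Set.of("a", "b"), Set.of("b"), Set.of("c"))));   // prints [0, 2, 1]
    }
}
```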
