Similar Documents (20 results)
1.
Software testing under a unit testing framework produces large numbers of test scripts, and how to make effective use of existing scripts, that is, how to reuse test script code, has become an important industry concern. The most common reuse requirement arises when a project switches to a new test framework: how can the test scripts that developers accumulated under the original unit testing framework be reused? To address this problem, a reuse scheme based on test script migration is proposed. Through analysis and automatic translation of the unit test scripts, the information contained in the original scripts is extracted and parsed into an XML-based intermediate script; XSLT is then applied to the information recorded in the XML to automatically generate unit test scripts for the target framework, thereby enabling unit test script reuse. Experiments verify the feasibility of the scheme.
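As an illustration of the XSLT step described above, here is a minimal sketch using the standard javax.xml.transform API; the stylesheet and file names are hypothetical placeholders, and the paper's actual intermediate schema and mapping rules are not reproduced here.

```java
import java.io.File;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

/** Minimal sketch of the migration step: apply an XSLT stylesheet to the
 *  XML intermediate script to emit a test for the target framework.
 *  File names are hypothetical, not artifacts from the paper. */
public class TestScriptMigrator {
    public static void main(String[] args) throws Exception {
        TransformerFactory factory = TransformerFactory.newInstance();
        // Stylesheet encoding the mapping from the intermediate
        // representation to the target framework's test API.
        Transformer transformer = factory.newTransformer(
                new StreamSource(new File("cppunit-to-junit.xslt")));
        // Transform the extracted intermediate script into a JUnit class.
        transformer.transform(
                new StreamSource(new File("intermediate-test.xml")),
                new StreamResult(new File("GeneratedTest.java")));
    }
}
```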

2.
Source code comments are a valuable instrument to preserve design decisions and to communicate the intent of the code to programmers and maintainers. Nevertheless, commenting source code and keeping comments up-to-date is often neglected, whether for lack of time or through programmer obliviousness. In this paper, we investigate the question of whether developers comment their code and to what extent they add comments or adapt them when they evolve the code. We present an approach to associate comments with source code entities to track their co-evolution over multiple versions. A set of heuristics is used to decide whether a comment is associated with its preceding or its succeeding source code entity. We analyzed the co-evolution of code and comments in eight different open source and closed source software systems. We found with statistical significance that (1) the relative amount of comments and source code grows at about the same rate; (2) the type of a source code entity, such as a method declaration or an if-statement, has a significant influence on whether or not it gets commented; (3) in six out of the eight systems, code and comments co-evolve in 90% of the cases; and (4) surprisingly, API changes and comments do not co-evolve, but the API is re-documented in a later revision. As a result, our approach enables a quantitative assessment of the commenting process in a software system. We can, therefore, leverage the results to provide feedback during development to increase the awareness of when to add comments or when to adapt comments because of source code changes.
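The abstract does not spell out the heuristics, so the following is only a hedged sketch of one plausible line-distance rule, assuming comments and code entities carry line positions: a trailing comment on the same line belongs to the preceding entity, a comment directly above an entity belongs to that entity, and otherwise the comment falls back to the preceding entity.

```java
/** Hedged sketch of a line-distance heuristic for associating a comment
 *  with a source code entity; Comment and Entity are illustrative types,
 *  not the paper's data model. */
public class CommentAssociator {
    record Comment(int startLine, int endLine) {}
    record Entity(int startLine, int endLine) {}

    static Entity associate(Comment c, Entity preceding, Entity succeeding) {
        // Trailing comment on the same line as the preceding entity.
        boolean trailsPreceding = preceding != null
                && c.startLine() == preceding.endLine();
        // Comment directly above (at most one line before) the next entity.
        boolean headsSucceeding = succeeding != null
                && succeeding.startLine() - c.endLine() <= 1;
        if (trailsPreceding) return preceding;
        if (headsSucceeding) return succeeding;
        return preceding != null ? preceding : succeeding;
    }
}
```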

3.
Software architectures such as plug-in and service-oriented architectures enable developers to build extensible software products, whose functionality can be enriched by adding or configuring components. A well-known example of such an architecture is Eclipse, best known for its use to create a series of extensible IDEs. Although such architectures give users and developers a great deal of flexibility to create new products, the complexity of the built systems increases. In order to manage this complexity, developers use extensive automated test suites. Unfortunately, current testing tools offer little insight into which of the many possible combinations of components and component configurations are actually tested. The goal of this paper is to remedy this problem. To that end, we interviewed 25 professional developers on the problems they experience in test suite understanding for plug-in architectures. The findings have been incorporated in five architectural views that provide an extensibility perspective on plug-in-based systems and their test suites. The views combine static and dynamic information on plug-in dependencies, extension initialization, extension and service usage, and the test suites. The views have been implemented in ETSE, the Eclipse Plug-in Test Suite Exploration tool. We evaluate the proposed views by analyzing eGit, Mylyn, and a Mylyn connector.

4.
In the current trend, the Extreme Programming methodology is widely adopted by small and medium-sized projects for dealing with rapidly or indefinitely changing requirements. The test-first strategy and code refactoring are important practices of Extreme Programming for rapid development and quality support. The test-first strategy emphasizes that test cases are designed before system implementation to preserve the correctness of artifacts during software development, whereas refactoring is the removal of "bad smell" code to improve quality without changing its semantics. However, the test-first strategy may conflict with code refactoring in the sense that the original test cases may be broken or become inefficient for testing programs that have been revised by code refactoring. In general, developers revise the test cases manually, since this is not complicated. However, when developers perform a pattern-based refactoring to improve quality, the effort of revising the test cases is much greater than for simple code refactorings. In our observation, a pattern-based refactoring is composed of many simple, atomic code refactorings. If we have the composition relationship and the mapping rules between code refactorings and test case refactorings, we may infer a test case revision guideline for pattern-based refactoring, as the sketch below illustrates. Based on this idea, we propose a four-phase approach to guide the construction of test case refactorings for design patterns. We introduce our approach using some well-known design patterns and evaluate its feasibility by means of test coverage.
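A hedged sketch of the composition idea, under the assumption that a pattern-based refactoring decomposes into named atomic refactorings and that each atomic refactoring maps to one test-revision action; all names in the table are invented for illustration and are not the paper's catalog.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

/** Hedged sketch: derive a test case revision guideline for a pattern-based
 *  refactoring by composing the test refactorings mapped to its atomic code
 *  refactorings. All names here are illustrative assumptions. */
public class TestRefactoringPlanner {
    // Mapping rules: atomic code refactoring -> test case refactoring.
    static final Map<String, String> RULES = Map.of(
            "ExtractInterface", "RetargetTestToInterfaceType",
            "MoveMethod",       "MoveAndRebindTestCalls",
            "ExtractClass",     "SplitFixtureAndAssertions");

    // A pattern-based refactoring as a sequence of atomic refactorings,
    // e.g. a (hypothetical) decomposition of introducing a Strategy.
    static final List<String> INTRODUCE_STRATEGY =
            List.of("ExtractInterface", "MoveMethod");

    /** Composes the test revision guideline for the given decomposition. */
    static List<String> testPlan(List<String> atomicRefactorings) {
        return atomicRefactorings.stream()
                .map(RULES::get)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Prints [RetargetTestToInterfaceType, MoveAndRebindTestCalls].
        System.out.println(testPlan(INTRODUCE_STRATEGY));
    }
}
```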

5.
6.
Our current understanding of how programmers perform feature location during software maintenance is based on controlled studies or interviews, which are inherently limited in size, scope and realism. Replicating controlled studies in the field can both explore the findings of these studies in wider contexts and study new factors that have not been previously encountered in the laboratory setting. In this paper, we report on a field study about how software developers perform feature location within source code during their daily development activities. Our study is based on two complementary field data sets: one that reflects complete IDE activity of 67 professional developers over approximately one month, and the other that reflects usage of an IR-based code search tool by nearly 600 developers. Analyzing these data, we report on how often developers use each type of code search tool, on the types of queries and retrieval strategies they use, and on patterns of developer feature location behavior following code search. The results of the study suggest that there is (1) a need for helping developers to devise better code search queries; (2) a lack of adoption of niche code search tools; (3) a need for code search tools to handle both lookup and exploratory queries; and (4) a need for better integration between code search, structured navigation, and debugging tools in feature location tasks.

7.
A new software development process called test-driven modeling (TDM) applies the Extreme Programming test-driven paradigm in a model-driven development environment. (The basis of this article is a project in Motorola's iDEN division that is extending and migrating a large legacy telecommunication system to new platforms using TDM.) This process involves automatic testing through simulation and using executable models as living software system architecture documents. In TDM, we use the same message sequence charts (MSCs) for both system analysis (or design documents) and unit test cases. Similarly, we use the same high-level modeling diagrams both for automatic code generation and as living software architecture documents to guide the system's detailed implementation in later phases. Practical results show that developers can effectively apply TDM to large projects with high productivity and quality in terms of the number of code defects.

8.
In the production of smart electricity meters, power utilities have found significant differences between the sample meters that manufacturers present during bidding and the batch-produced meters manufactured in volume after the bid is won. Owing to insufficient inspection, many batch-produced meters put into service exhibit abnormal operating behavior and substandard quality, incurring unnecessary maintenance costs for these meters. To address this problem, a software function inspection scheme for smart meters was devised, and a reverse-engineering model for embedded smart meter code was designed. The model analyzes the meter's core program to obtain the system's runtime characteristics and uses disassembly algorithms to analyze the functionality of the firmware code, thereby performing differential testing of software functions on embedded smart meters. The model comprises three modules: firmware code extraction, firmware code disassembly, and software function comparison; the disassembly module employs an improved single-step scanning algorithm (SDA) built on the existing linear sweep and recursive traversal algorithms. In practical use, comparing batch-produced meters against samples clearly revealed differences in system functionality; moreover, when maintaining meters already deployed by power utilities, the model kept the functional and quality deviation between meters about to enter production and those already in service within ±20%.
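The abstract names SDA without defining it, so as a point of reference, here is a hedged sketch of the plain linear sweep that SDA reportedly improves on; the instruction decoder is left as a hypothetical interface rather than tied to any real disassembly library.

```java
import java.util.ArrayList;
import java.util.List;

/** Hedged sketch of linear-sweep disassembly, the baseline that the paper's
 *  single-step scanning algorithm (SDA) builds on. Decoder and Insn are
 *  hypothetical illustration types. */
public class LinearSweep {
    interface Decoder { Insn decodeAt(byte[] code, int offset); }
    record Insn(int offset, int length, String mnemonic) {}

    /** Decodes instructions back to back from the start of the firmware
     *  image. Control flow is not followed, so data bytes embedded in the
     *  code section may be misdecoded; this is the weakness that hybrid
     *  schemes combining sweep and traversal try to address. */
    static List<Insn> sweep(byte[] firmware, Decoder decoder) {
        List<Insn> listing = new ArrayList<>();
        int offset = 0;
        while (offset < firmware.length) {
            Insn insn = decoder.decodeAt(firmware, offset);
            if (insn == null) { offset++; continue; } // skip undecodable byte
            listing.add(insn);
            offset += insn.length();
        }
        return listing;
    }
}
```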

9.
This paper presents an investigation into four distinct aspects of software complexity. An initial partition of the software complexity domain separates the attributes of static software complexity from those of dynamic software complexity. Static complexity measurement views all program modules monolithically: all of the code for all of the modules is measured as extracted from source code files. When computer software is actually executed, not all modules are executed to the same extent; some receive a large proportion of execution activity. Further, when these modules execute, not all code in the modules executes. If just the code that is executed is measured for complexity, a completely different view of the program module emerges. In this investigation we examine the static complexity of a program together with three dynamic measures: functional, fractional, and operational complexity. The eminent value of the dynamic metrics is shown in their role as measures of test outcomes.
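To make the static/dynamic distinction concrete, a hedged sketch follows; the metric definitions below are simplified assumptions (functional complexity as the complexity of executed modules, fractional complexity scaled by per-module coverage), not the paper's exact formulas.

```java
import java.util.Map;
import java.util.Set;

/** Hedged, simplified sketch contrasting static and dynamic complexity.
 *  The exact definitions used in the paper may differ. */
public class ComplexityProfile {
    /** Static complexity: sum over all modules, measured from source. */
    static double staticComplexity(Map<String, Double> moduleComplexity) {
        return moduleComplexity.values().stream()
                .mapToDouble(Double::doubleValue).sum();
    }

    /** Functional complexity (assumed form): static complexity restricted
     *  to the modules that actually executed in a given run. */
    static double functionalComplexity(Map<String, Double> moduleComplexity,
                                       Set<String> executedModules) {
        return moduleComplexity.entrySet().stream()
                .filter(e -> executedModules.contains(e.getKey()))
                .mapToDouble(Map.Entry::getValue).sum();
    }

    /** Fractional complexity (assumed form): each module's complexity
     *  scaled by the fraction of its code that ran. */
    static double fractionalComplexity(Map<String, Double> moduleComplexity,
                                       Map<String, Double> coverageFraction) {
        return moduleComplexity.entrySet().stream()
                .mapToDouble(e -> e.getValue()
                        * coverageFraction.getOrDefault(e.getKey(), 0.0))
                .sum();
    }
}
```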

10.
Unit testing plays a major role in the software development process. What started as an ad hoc approach is becoming a common practice among developers. It enables the immediate detection of bugs introduced into a unit whenever code changes occur. Hence, unit tests provide a safety net of regression tests and validation tests which encourages developers to refactor existing code with greater confidence. Unit testing is also one of the major cornerstones of the agile development approach: agile methods require all software classes to have unit tests that can be executed by an automated unit-testing framework. However, not all software systems have unit tests, and when changes to such software are needed, writing unit tests from scratch, which is hard and tedious, might not be cost effective. In this paper we propose a technique which automatically generates unit tests for software that does not have such tests. We have implemented GenUTest, a prototype tool which captures and logs inter-object interactions occurring during the execution of Java programs, using the aspect-oriented language AspectJ. These interactions are used to generate JUnit tests. They also serve in generating mock aspects: mock-object-like entities which enable testing units in isolation. The generated JUnit tests and mock aspects are independent of the tool and can be used by developers to perform unit tests on the software. The comprehensiveness of the unit tests depends on the software execution. We applied GenUTest to several open source projects such as NanoXML and JODE. We present the results, explain the limitations of the tool, and point out directions for future work to improve the code coverage provided by GenUTest and its scalability.
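To picture the output side of such capture-and-replay generation, here is a hedged sketch of the kind of JUnit test it might emit for one recorded interaction sequence; the class under test, the calls, and the expected value are all invented for illustration and are not actual GenUTest output.

```java
import static org.junit.Assert.assertEquals;

import java.util.HashMap;
import java.util.Map;
import org.junit.Test;

/** Hedged sketch of a capture/replay-style generated unit test: a recorded
 *  call sequence is replayed and the logged return value becomes the
 *  assertion. All names and values are hypothetical. */
public class ShoppingCartGeneratedTest {
    /** Minimal stand-in for the unit under test (hypothetical). */
    static class ShoppingCart {
        private final Map<String, Integer> items = new HashMap<>();
        void add(String name, int qty) { items.merge(name, qty, Integer::sum); }
        int totalItems() {
            return items.values().stream().mapToInt(Integer::intValue).sum();
        }
    }

    @Test
    public void replayRecordedInteraction() {
        ShoppingCart cart = new ShoppingCart();
        cart.add("book", 2);                // recorded call #1
        cart.add("pen", 5);                 // recorded call #2
        assertEquals(7, cart.totalItems()); // value logged at capture time
    }
}
```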

11.
The evolution of highly configurable systems is known to be a challenging task. Thorough understanding of configuration options, their relationships, and their implementation in various types of artefacts (variability model, mapping, and implementation) is required to avoid compilation errors, invalid products, or dead code. Recent studies focusing on the co-evolution of artefacts detailed feature-oriented change scenarios, describing how related artefacts might change over time. However, relying on manual analysis of commits, such work does not provide the means to obtain quantitative information on the frequency of the described scenarios, nor on their exhaustiveness for the evolution of a large-scale system. In this work, we propose FEVER and its instantiation for the Linux kernel. FEVER extracts detailed information on changes in variability models (KConfig files), assets (preprocessor-based C code), and mappings (Makefiles). We apply this methodology to the Linux kernel and build a dataset comprising 15 releases of the kernel history. We evaluated the FEVER approach by manually inspecting the data and comparing it with commits in the system's history. The evaluation shows that FEVER accurately captures feature-related changes for more than 85% of the 810 manually inspected commits. We use the collected data to reflect on occurrences of co-evolution in practice. Our analysis shows that complex co-evolution scenarios occur in every studied release but are not among the most frequent change scenarios, as they occur for only 8 to 13% of the evolving features. Moreover, only a minority of developers working on a given release will make changes to all artefacts related to a feature (between 10% and 13% of authors). While our conclusions are derived from observations on the evolution of the Linux kernel, we believe that they may have implications for tool developers as well as guide further research in the field of co-evolution of artefacts.
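To make the variability-model side concrete, a hedged sketch of one extraction step: diffing the feature declarations of two KConfig revisions. The simple `config NAME` pattern is an assumption that covers only the common case; real KConfig syntax (menuconfig, choice blocks, and so on) is richer, and FEVER's actual extraction is more elaborate.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Hedged sketch of a FEVER-style extraction step: diffing the feature
 *  declarations of two KConfig revisions. Only plain "config NAME"
 *  entries are recognized here. */
public class KConfigDiff {
    private static final Pattern CONFIG =
            Pattern.compile("^config\\s+(\\w+)", Pattern.MULTILINE);

    static Set<String> features(String kconfigText) {
        Set<String> names = new HashSet<>();
        Matcher m = CONFIG.matcher(kconfigText);
        while (m.find()) names.add(m.group(1));
        return names;
    }

    /** Features declared in the new revision but not in the old one. */
    static Set<String> addedFeatures(String oldText, String newText) {
        Set<String> added = new HashSet<>(features(newText));
        added.removeAll(features(oldText));
        return added;
    }
}
```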

12.

Software testing is key for quality assurance of embedded systems. However, with an increased development pace, the amount of test results data risks growing to a level where exploration and visualization of the results become unmanageable. This paper covers Tim, a tool implemented at a company developing embedded systems, where software development occurs in parallel branches and nightly testing is partitioned over software branches, test systems and test cases. Tim aims to replace a previous solution that had problems with scalability, requirements, and its technological flora. Tim was implemented with a reference group over several months. For validation, data were collected both from reference group meetings and from logs of the tool's usage. Data were analyzed quantitatively and qualitatively. The main contributions of the study include the implementation of eight views for test results exploration and visualization, the identification of four solution patterns for these views (filtering, aggregation, previews and comparisons), and six challenges frequently discussed at reference group meetings (expectations, anomalies, navigation, integrations, hardware details and plots). Results are put in perspective with related work, and future work is proposed, e.g., enhanced anomaly detection and integrations with more systems such as risk management, source code and requirements repositories.


13.

In software maintenance, a system needs regression testing after the software is modified. Executing regression tests confirms that modified code has no adverse effects and does not introduce new faults into the existing functionality of the software. When working with object-oriented programming, code-based testing is generally expensive. In this study, we propose a technique for regression testing that uses unified modeling language (UML) diagrams and code-based analysis for object-oriented software. A design- and code-based technique with an evolutionary approach is presented to select the best possible test cases from the test suite. We use a dependency graph as an intermediate representation of the object-oriented program to identify changes. Test case selection is done at the design level using the UML model, and the two model versions are compared to identify the changes between them. The proposed approach maximizes the APFD (Average Percentage of Faults Detected) value.
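For reference, the standard definition of the APFD metric maximized above, for an ordering of n test cases against m faults, where TF_i is the position in the order of the first test case that reveals fault i:

```latex
\mathrm{APFD} = 1 - \frac{TF_1 + TF_2 + \cdots + TF_m}{n\,m} + \frac{1}{2n}
```

Orderings that expose faults earlier in the test execution sequence yield higher APFD values.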


14.
Much of software developers' time is spent understanding unfamiliar code. To better understand how developers gain this understanding and how software development environments might be involved, a study was performed in which developers were given an unfamiliar program and asked to work on two debugging tasks and three enhancement tasks for 70 minutes. The study found that developers interleaved three activities. They began by searching for relevant code both manually and using search tools; however, they based their searches on limited and misrepresentative cues in the code, environment, and executing program, often leading to failed searches. When developers found relevant code, they followed its incoming and outgoing dependencies, often returning to it and navigating its other dependencies; while doing so, however, Eclipse's navigational tools caused significant overhead. Developers collected code and other information that they believed would be necessary to edit, duplicate, or otherwise refer to later by encoding it in the interactive state of Eclipse's package explorer, file tabs, and scroll bars. However, developers lost track of relevant code as these interfaces were used for other tasks, and were forced to find it again. These issues caused developers to spend, on average, 35 percent of their time performing the mechanics of navigation within and between source files. These observations suggest a new model of program understanding grounded in theories of information foraging, and suggest ideas for tools that help developers seek, relate, and collect information in a more effective and explicit manner.

15.
Software defects due to coding errors continue to plague the industry with disastrous impact, especially in the enterprise application software category. Identifying how much of these defects are specifically due to coding errors is a challenging problem. In this paper, we investigate the best methods for preventing new coding defects in enterprise resource planning (ERP) software, and for discovering and fixing existing coding defects. For this purpose, a large-scale survey-based ex-post-facto study was conducted, coupled with experiments involving static code analysis tools on both sample code and a real-life, million-line open-source ERP system. The survey respondents had experience developing ERP software. This research sought to determine whether software defects could be merely mitigated or totally eliminated, and what supporting policies, procedures and infrastructure were needed to remedy the problem. In this paper, we introduce a hypothetical framework developed to address our research questions, the hypotheses we have conjectured, the research methodology we have used, and the data analysis methods used to validate the stated hypotheses. Our study revealed that: (a) the best way for ERP developers to discover coding-error-based defects in existing programs is to choose an appropriate programming language and perform a combination of manual and automated code auditing, static code analysis, and formal test case design, execution and analysis; (b) the most effective way to mitigate defects in an ERP system is to track the defect densities in the ERP software, fix the defects found, perform regression testing, and update the resulting defect density statistics; and (c) the impact of epistemological and legal commitments on the defect densities of ERP systems is inconclusive. We believe that our proposed model has the potential to vastly improve the quality of ERP and other similar software by reducing coding-error defects, and we recommend future research aimed at testing the model in actual production environments.

16.
Test suites are a valuable source of up-to-date documentation, as developers continuously modify them to reflect changes in the production code and preserve an effective regression suite. While maintaining traceability links between unit tests and the classes under test can be useful to selectively retest code after a change, the value of having traceability links goes far beyond these potential savings. One key use is to help developers better comprehend the dependencies between tests and classes and to maintain consistency during refactoring. Despite its importance, test-to-code traceability is not common in software development and, when needed, traceability information has to be recovered during software development and evolution. We propose an advanced approach, named SCOTCH+ (Source code and COncept based Test to Code traceability Hunter), to support the developer in identifying links between unit tests and tested classes. Given a test class, represented by a JUnit class, the approach first exploits dynamic slicing to identify a set of candidate tested classes. Then, external and internal textual information associated with the classes retrieved by slicing is analyzed to refine this set and identify the final set of candidate tested classes. The external information is derived from the analysis of the class name, while internal information is derived from identifiers and comments. The approach is evaluated on five software systems. The results indicate that the accuracy of the proposed approach far exceeds the leading techniques found in the literature.
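The external, class-name signal mentioned above can be illustrated with a hedged sketch that scores each candidate class retrieved by slicing against the JUnit class name, favoring the conventional FooTest naming pattern; the scoring values are assumptions for illustration, not SCOTCH+'s actual formula.

```java
/** Hedged sketch of the class-name component of test-to-code traceability:
 *  rank candidate tested classes by how well their names match the JUnit
 *  class name. The scoring is illustrative, not SCOTCH+'s. */
public class NameMatcher {
    /** Strips a conventional Test suffix or prefix from a test class name. */
    static String impliedClassName(String testClassName) {
        if (testClassName.endsWith("Test"))
            return testClassName.substring(0, testClassName.length() - 4);
        if (testClassName.startsWith("Test"))
            return testClassName.substring(4);
        return testClassName;
    }

    /** 1.0 for an exact conventional match, 0.5 for a partial overlap. */
    static double score(String testClassName, String candidateClass) {
        String implied = impliedClassName(testClassName);
        if (candidateClass.equals(implied)) return 1.0;
        if (candidateClass.contains(implied)
                || implied.contains(candidateClass)) return 0.5;
        return 0.0;
    }
}
```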

17.
Finding software faults is a problematic activity in many systems. Existing approaches usually work close to the system implementation and require developers to perform different code analyses. Although these approaches are effective, the amount of information to be managed by developers is often overwhelming. This problem calls for complementary approaches able to work at higher levels of abstraction than code, helping developers to keep intellectual control over the system when analyzing faults. In this context, we present an expert-system approach, called FLABot, which assists developers in fault-localization tasks by reasoning about faults using software architecture models. We have evaluated a prototype of FLABot in two medium-size case studies, involving novice and non-novice developers. We compared the time consumed, code browsed and faults found by these developers, with and without the support of FLABot, observing interesting effort reductions when applying FLABot. The results and lessons learned show that our approach is practical and reduces the effort of finding individual faults.

18.
Context: Continuous Integration (CI) has become an established best practice of modern software development. Its philosophy of regularly integrating the changes of individual developers with the master code base saves the entire development team from descending into Integration Hell, a term coined in the field of extreme programming. In practice, CI is supported by automated tools to cope with this repeated integration of source code through automated builds and testing. One of the main problems, however, is that relevant information about the quality and health of a software system is scattered both across those tools and across multiple views. Objective: This paper introduces a quality awareness framework for CI data and its conceptual model used for data integration and visualization. The framework, called SQA-Mashup, makes use of the service-based mashup paradigm and integrates information from the entire CI toolchain into a single service. Method: The research approach followed in our work consists of (i) a conceptual model for data integration and visualization, (ii) a prototypical framework implementation based on tool requirements derived from the literature, and (iii) a controlled user study to evaluate its usefulness. Results: The results of the controlled user study showed that SQA-Mashup's single point of access allows users to answer questions regarding the state of a system more quickly (57%) and more accurately (21.6%) than with standalone CI tools. Conclusions: The SQA-Mashup framework can serve as a one-stop shop for software quality data monitoring in a software development project. It enables easy access to CI data which otherwise is not integrated but scattered across multiple CI tools. Our dynamic visualization approach allows integrated CI data to be tailored to the information needs of different stakeholders such as developers or testers.

19.
Software development is rarely an individual effort and generally involves teams of developers collaborating to produce good, reliable code. Within the source code there exist technical dependencies that arise when software components use services from other components. The different ways of assigning the design, development, and testing of these software modules to people can cause various coordination problems among them. We claim that the collaboration of the developers, designers and testers must be related to, and governed by, the technical task structure. These collaboration practices are handled in what we call Socio-Technical Patterns.

The TESNA project (Technical Social Network Analysis) we report on in this paper addresses this issue. We propose a method and a tool that a project manager can use in order to detect the socio-technical coordination problems. We test the method and tool in a case study of a small and innovative software product company.

20.
Context: Component-based software engineering is aimed at managing the complexity of large-scale software development by composing systems from reusable parts. To understand or validate the behavior of such a system, one needs to understand the components involved in combination with how they are configured and composed. This becomes increasingly difficult when components are implemented in various programming languages and composition is specified in external artifacts. Moreover, tooling that supports in-depth system-wide analysis of such heterogeneous systems is lacking. Objective: This paper contributes a method to analyze and visualize information flow in a component-based system at various levels of abstraction. These visualizations are designed to support the comprehension needs of both safety domain experts and software developers for, respectively, certification and evolution of safety-critical cyber-physical systems. Method: We build system-wide dependence graphs and use static program slicing to determine all possible end-to-end information flows through and across a system's components. We define a hierarchy of five abstractions over these information flows that reduce visual distraction and cognitive overload while satisfying the users' information needs. We improve on our earlier work to provide interconnected views that support both systematic and opportunistic navigation scenarios. Results: We discuss the design and implementation of our approach and the resulting views in a prototype tool called FlowTracker. We summarize the results of a qualitative evaluation study, carried out via two rounds of interviews, on the effectiveness and usability of these views. We discuss a number of improvements, such as more selective information presentation, that resulted from the evaluation. Conclusion: The evaluation shows that the proposed approach and views are useful for understanding and validating heterogeneous component-based systems, and that they address information needs that could previously be met only by manual inspection of the source code. We discuss lessons learned and directions for future work.
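The slicing step can be pictured as reachability over the system dependence graph; a hedged sketch follows, with the graph reduced to a plain adjacency map rather than the full dependence representation a real slicer would use.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/** Hedged sketch of forward static slicing as reachability on a dependence
 *  graph: every node reachable from the source along dependence edges may
 *  receive information from it, giving an end-to-end information flow. */
public class ForwardSlicer {
    /** deps maps each node to the nodes that depend on it (forward edges). */
    static Set<String> forwardSlice(Map<String, Set<String>> deps, String source) {
        Set<String> slice = new HashSet<>();
        Deque<String> work = new ArrayDeque<>();
        work.push(source);
        while (!work.isEmpty()) {
            String node = work.pop();
            if (!slice.add(node)) continue; // already visited
            for (String next : deps.getOrDefault(node, Set.of()))
                work.push(next);
        }
        return slice;
    }
}
```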
