Similar Documents

20 similar documents found.
1.
Lifecycle models divide the test process into consecutive test levels that are considered independently. This strict separation obstructs the view of the test process as a whole and fails to reflect the commonalities across test levels. Multi-level testing is an emerging approach that addresses the challenge of integrating test levels, with particular emphasis on embedded systems. In this paper, we introduce a test level integration strategy based on reuse, called bottom-up reuse. In addition, we present an instrument that seamlessly supports this strategy: multi-level test cases. We also provide a case study that reflects the positive results we have obtained in practice so far and demonstrates the feasibility of our test level integration approach. Bottom-up reuse and multi-level test cases enable testing earlier in the development process while reducing the effort required for test specification, test design, and test implementation.

2.
Automated test case selection for a new product in a product line is challenging for several reasons. First, the variability within the product line needs to be captured in a systematic way; second, the reusable test cases in the repository need to be identified for testing a new product. The objective of such an automated process is to reduce the overall selection effort (e.g., selection time) while achieving an acceptable level of coverage of the tested functionality. In this paper, we propose a systematic and automated methodology using a feature model for testing (FM_T) to capture the commonalities and variabilities of a product line and a component family model for testing (CFM_T) to capture the overall structure of the test cases in the repository. With our methodology, a test engineer does not need to go through the repository manually to select a relevant set of test cases for a new product. Instead, the engineer only selects a set of relevant features for the product using FM_T at a higher level of abstraction, and a set of relevant test cases is selected automatically. We evaluated our methodology in three different ways: (1) we applied it to a product line of video conferencing systems called Saturn, developed by Cisco, and the results show that it can reduce the selection effort significantly; (2) we conducted a questionnaire-based study to solicit the views of the test engineers who were involved in developing FM_T and CFM_T, and the results show that they are positive about adopting our methodology and models (FM_T and CFM_T) in their current practice; (3) we conducted a controlled experiment with 20 graduate students to assess the performance (i.e., cost, effectiveness, and efficiency) of our automated methodology as compared with the manual approach. The results showed that our methodology is cost-effective compared with the manual approach and, at the same time, that its efficiency is not affected by the increased complexity of products.
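As a minimal illustration of the selection step, the sketch below assumes a flat feature-to-test-case mapping; the feature names and test case IDs are hypothetical, and the paper's FM_T/CFM_T models are considerably richer (hierarchical features, constraints, a structured repository).

```python
# CFM_T stand-in: which test cases in the repository cover which feature.
feature_to_tests = {
    "video_call":   {"TC01", "TC02", "TC07"},
    "screen_share": {"TC03", "TC07"},
    "recording":    {"TC04", "TC05"},
    "encryption":   {"TC06"},
}

def select_tests(selected_features):
    """Return the union of test cases covering the selected features."""
    selected = set()
    for feature in selected_features:
        selected |= feature_to_tests.get(feature, set())
    return sorted(selected)

# A new product configured with two features: the engineer picks features,
# not test cases, and the relevant subset is derived automatically.
print(select_tests(["video_call", "encryption"]))  # ['TC01', 'TC02', 'TC06', 'TC07']
```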

3.
When trying to introduce TTCN-3 and thus improve test automation, we are regularly faced with the issue of legacy test automation. These existing approaches range from small pieces of code developed for a very specific task to large applications built with substantial development effort, or even purchased solutions from third-party vendors that may include hardware components. Often these solutions cater to a specific need, and their combined application is controlled by humans. A higher degree of test automation can be achieved if these solutions are combined in a test system, for example to enable batch runs. TTCN-3 can be a good choice for implementing a future-oriented solution. However, recreating all of the existing parts in TTCN-3 may not be feasible from a business perspective. Instead of refraining from creating an advanced solution, the idea is to use those existing parts to substantially reduce the effort needed to introduce an improved test automation solution. In this paper, we report on the successful integration of legacy test solutions using TTCN-3 and the creation of a new, advanced test system that enables overnight test runs. The paper illustrates our challenges and experiences from this project.

4.
Software testing is both a time- and resource-consuming activity in software development. The most difficult parts of software testing are the generation and prioritization of test data, and both are principally performed manually. Hence, introducing an automation approach would significantly reduce the total cost incurred in the software development lifecycle. A number of automatic test case generation (ATCG) and prioritization approaches have been explored. In this paper, we propose two approaches: (1) a path-specific approach for ATCG using the following metaheuristic techniques: the genetic algorithm (GA), particle swarm optimization (PSO), and artificial bee colony optimization (ABC); and (2) a test case prioritization (TCP) approach using PSO. Based on our experimental findings, we conclude that ABC outperforms the GA- and PSO-based approaches for ATCG. Moreover, the results for PSO on TCP demonstrate its applicability to both small and large test suites, compared against random, reverse, and unordered prioritization schemes. This work thus constitutes a comprehensive and exhaustive study of the application of metaheuristic algorithms to the ATCG and TCP problems in software engineering.
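To make the search-based generation idea concrete, here is a minimal sketch of path-specific test data generation with a genetic algorithm, assuming a single numeric input and a simple branch-distance fitness; the target predicate is hypothetical, and the PSO and ABC variants optimize the same kind of fitness with different search operators.

```python
import random

def branch_distance(x):
    """Distance to satisfying the target branch `x == 4242`; 0 means covered."""
    return abs(x - 4242)

def evolve(pop_size=50, generations=200, lo=0, hi=100000):
    population = [random.randint(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=branch_distance)
        if branch_distance(population[0]) == 0:
            break  # target branch covered
        parents = population[: pop_size // 2]   # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) // 2                # crossover: arithmetic mean
            if random.random() < 0.2:           # mutation: small perturbation
                child += random.randint(-100, 100)
            children.append(max(lo, min(hi, child)))
        population = parents + children
    return min(population, key=branch_distance)

print(evolve())  # best input found; ideally 4242, which covers the branch
```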

5.

Robotic process automation is a disruptive technology for rapidly automating tasks and subprocesses that are already digital yet still manual, as well as whole business processes. In contrast to other process automation technologies, robotic process automation is lightweight and accesses only the presentation layer of IT systems to mimic human behavior. Due to the novelty of robotic process automation and the varying approaches to implementing the technology, there are reports that up to 50% of robotic process automation projects fail. To tackle this issue, we use a design science research approach to develop a framework for the implementation of robotic process automation projects. We analyzed 35 reports on real-life projects to derive a preliminary sequential model, and then performed multiple expert interviews and workshops to validate and refine the model. The result is a framework with variable stages that offers guidelines flexible enough to be applicable in complex and heterogeneous corporate environments as well as in small and medium-sized companies. It is structured into the three phases of initialization, implementation, and scaling, which comprise eleven stages that are relevant both during an individual project and as a continuous cycle spanning projects. Together, they structure how to manage knowledge and support processes for the execution of robotic process automation implementation projects.


6.
Test case prioritization involves scheduling test cases in an order that increases the effectiveness of achieving some performance goals. One of the most important performance goals is the rate of fault detection: test cases should run in an order that increases the possibility of fault detection and that detects the most severe faults at the earliest point in the testing life cycle. In this paper, we put forth a model for system-level test case prioritization (TCP) based on the software requirement specification, with the aims of improving user satisfaction with quality software, being cost-effective, and improving the rate of severe fault detection. The proposed model prioritizes the system test cases based on six factors: customer priority, changes in requirements, implementation complexity, completeness, traceability, and fault impact. The proposed prioritization technique is validated with two different validation techniques and evaluated in three phases with student projects and two sets of industrial projects; the results show convincingly that it improves the rate of severe fault detection.
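A minimal sketch of the scoring idea: each system test case receives a weighted sum of the six factor values, and tests run in descending score order. The factor values and the equal weights below are illustrative assumptions; the paper derives its values from the requirement specification.

```python
FACTORS = ["customer_priority", "requirement_changes", "implementation_complexity",
           "completeness", "traceability", "fault_impact"]
WEIGHTS = {f: 1 / len(FACTORS) for f in FACTORS}  # assumed equal weights

test_cases = {  # hypothetical per-factor ratings on a 1-10 scale
    "TC1": {"customer_priority": 9, "requirement_changes": 2, "implementation_complexity": 5,
            "completeness": 7, "traceability": 6, "fault_impact": 8},
    "TC2": {"customer_priority": 4, "requirement_changes": 8, "implementation_complexity": 9,
            "completeness": 5, "traceability": 4, "fault_impact": 9},
}

def priority(tc):
    """Weighted prioritization value of a test case."""
    return sum(WEIGHTS[f] * test_cases[tc][f] for f in FACTORS)

# Higher weighted score first: severe-fault-revealing tests run earlier.
for tc in sorted(test_cases, key=priority, reverse=True):
    print(tc, round(priority(tc), 2))
```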

7.
Context

Software development time has been reduced by new development tools and paradigms, and testing must keep pace with these changes. In order to release software products in a timely manner and to minimise the impact of errors introduced during maintenance interventions, testing automation has become a central goal. Whilst research has produced significant results in test case generation and tools for test case (re-)execution, one of the most important open problems in testing is the automation of oracle generation. The oracle decides whether the program under test has behaved correctly and then issues a pass/fail verdict. In most cases, writing the oracle is a time-consuming and largely manual activity.

Objective

This article automates two important steps of the test oracle: obtaining the expected output and comparing it with the actual output, using a model-driven approach.

Method

The oracle automation problem is addressed with a model-driven framework based on OMG standards: UML is used as the metamodel, and QVT and MOF2Text as the transformation languages. The automated testing framework takes the UML models that describe the system as input and derives from them the test model and then the test code, following a model-driven approach. Test oracle procedures are obtained from a UML state machine.

Results

A complete executable test case at the functional test level is obtained, composed of a test procedure with parametrized input test data and automated expected results.

Conclusion

Oracle automation is achieved using a model-driven approach; test cases are obtained automatically from UML models. The model-driven testing framework was applied to an industrial application and proved useful for automating the testing of the system's main functionalities.
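As a rough illustration of the oracle idea, the sketch below stands in a transition table for the UML state machine: it computes the expected output for an event sequence and compares it with the actual output to issue a pass/fail verdict. States, events, and outputs are hypothetical, and the paper generates such procedures from UML models via QVT/MOF2Text rather than by hand.

```python
TRANSITIONS = {  # (state, event) -> (next_state, expected_output)
    ("idle",        "insert_card"): ("waiting_pin", "ask_pin"),
    ("waiting_pin", "pin_ok"):      ("menu",        "show_menu"),
    ("waiting_pin", "pin_bad"):     ("idle",        "eject_card"),
}

def oracle(events, actual_outputs):
    """Replay events on the model and compare expected vs. actual outputs."""
    state = "idle"
    for event, actual in zip(events, actual_outputs):
        if (state, event) not in TRANSITIONS:
            return "fail (unexpected event %r in state %r)" % (event, state)
        state, expected = TRANSITIONS[(state, event)]
        if actual != expected:
            return "fail (expected %r, got %r)" % (expected, actual)
    return "pass"

print(oracle(["insert_card", "pin_ok"], ["ask_pin", "show_menu"]))  # pass
```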

8.

Context

The testing process is a time-consuming, expensive, and labor-intensive activity in any software setting, including aspect-oriented programming (AOP). To reduce testing costs and human effort, and to achieve improvements in both the quality and productivity of AOP, it is desirable to automate the testing of aspect-oriented programs as much as possible.

Objective

In recent years, a lot of research effort has been devoted to testing aspect-oriented programs, but less effort has been dedicated to automated AOP testing. This suggests that the current research on automated AOP testing is not sufficient and is still in its infancy. In order to advance the state of research in this area and to provide testers of AOP-based projects with a basis for comparison, a thorough, experimental evaluation of the current automated AOP testing approaches is required. Thus, the objective of this paper is to provide such an evaluation.

Method

In this paper, we carry out an empirical study based on mutation analysis to examine four existing automated AOP testing approaches (namely Wrasp, Aspectra, Raspect, and EAT), particularly their underlying test input generation and selection strategies, with regard to fault detection effectiveness. In addition, the approaches are compared in terms of the effort required to detect faults, as part of an efficiency evaluation.

Results

The experimental results and comparison provide insights into the effectiveness and efficiency of automated AOP testing and the respective strengths and weaknesses of the approaches. The results showed that EAT is more effective than the other automated AOP testing approaches, although the difference is not significant in all cases: EAT was significantly better than Wrasp at the 95% confidence level (i.e., p < 0.05), but not significantly better than Aspectra or Raspect. Concerning test effort efficiency, Wrasp was significantly (p < 0.05) more efficient, requiring the lowest amount of test effort compared to the other approaches, whereas EAT was the least efficient, recording the highest amount of test effort.

Conclusion

This implies that EAT is currently the most effective automated AOP testing approach, but perhaps the least efficient. More generally, search-based testing (the underlying strategy of EAT) may achieve better effectiveness, but at the cost of greater test effort compared to random testing (the underlying strategy of the other approaches).
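For readers unfamiliar with the measurement, here is a minimal sketch of the mutation-analysis scoring that underlies such effectiveness comparisons: a suite's score is the fraction of non-equivalent mutants it kills. The mutants and kill sets below are fabricated for illustration and do not reproduce the study's data.

```python
mutants = ["m1", "m2", "m3", "m4", "m5"]   # generated mutants of the program
equivalent = {"m5"}                         # behaviorally identical, excluded
killed_by = {                               # mutants each approach's suite kills
    "Wrasp":    {"m1", "m2"},
    "Aspectra": {"m1", "m2", "m3"},
    "Raspect":  {"m1", "m3"},
    "EAT":      {"m1", "m2", "m3", "m4"},
}

def mutation_score(approach):
    """Fraction of non-equivalent mutants killed by the approach's tests."""
    scorable = {m for m in mutants if m not in equivalent}
    return len(killed_by[approach] & scorable) / len(scorable)

for approach in killed_by:
    print(approach, f"{mutation_score(approach):.0%}")
# EAT kills the most mutants here, mirroring the reported effectiveness ranking.
```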

9.
Developing high-quality requirements specifications often demands a thoughtful analysis and an adequate level of expertise from analysts. Although requirements modeling techniques provide mechanisms for abstraction and clarity, fostering the reuse of shared functionality (e.g., via UML relationships for use cases), they are seldom employed in practice. A particular quality problem of textual requirements, such as use cases, is that of having duplicate pieces of functionality scattered across the specifications. Duplicate functionality can sometimes improve readability for end users, but it hinders development-related tasks such as effort estimation, feature prioritization, and maintenance, among others. Unfortunately, inspecting textual requirements by hand to deal with redundant functionality can be an arduous, time-consuming, and error-prone activity for analysts. In this context, we introduce a novel approach called ReqAligner that aids analysts in spotting signs of duplication in use cases in an automated fashion. To do so, ReqAligner combines several text processing techniques, such as a use-case-aware classifier and a customized algorithm for sequence alignment. Essentially, the classifier converts the use cases into an abstract representation consisting of sequences of semantic actions, and these sequences are then compared pairwise to identify action matches, which become possible duplications. We have applied our technique to five real-world specifications, achieving promising results and identifying many sources of duplication in the use cases.
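A minimal sketch of the alignment step, assuming the use cases have already been abstracted into sequences of semantic actions: a Needleman-Wunsch style dynamic program scores how well two sequences align, and long runs of matching actions hint at duplicated functionality. The scoring constants and action names are illustrative, not ReqAligner's.

```python
MATCH, MISMATCH, GAP = 2, -1, -1

def align_score(a, b):
    """Global alignment score between two action sequences."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * GAP
    for j in range(1, m + 1):
        dp[0][j] = j * GAP
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = dp[i - 1][j - 1] + (MATCH if a[i - 1] == b[j - 1] else MISMATCH)
            dp[i][j] = max(diag, dp[i - 1][j] + GAP, dp[i][j - 1] + GAP)
    return dp[n][m]

uc_login  = ["validate_input", "query_account", "check_password", "show_home"]
uc_signup = ["validate_input", "query_account", "create_account", "show_home"]
print(align_score(uc_login, uc_signup))  # high score flags possible duplication
```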

10.
Context

Quality assurance effort, especially testing effort, is often a major cost factor during software development, sometimes consuming more than 50% of the overall development effort. Consequently, one major goal is often to reduce testing effort.

Objective

The main goal of this systematic mapping study is to identify existing approaches that are able to reduce testing effort. The resulting overview should serve both researchers and practitioners, identifying future research directions on the one hand and potential for improvement in practical environments on the other.

Method

Two researchers performed a systematic mapping study focusing on four databases, with an initial result set of 4020 articles.

Results

In total, we selected and categorized 144 articles. Five different areas were identified that exploit different ways of reducing testing effort: approaches that predict defect-prone parts or defect content, automation, test input reduction approaches, quality assurance techniques applied before testing, and test strategy approaches.

Conclusion

The results reflect an increased interest in this topic in recent years. Many different approaches have been developed, refined, and evaluated in different environments. The highest attention was given to automation and prediction approaches, and some input reduction approaches were also found. However, only a small number of approaches combine early quality assurance activities with testing to reduce test effort. Given the continuous challenge of reducing test effort, further research in this area is expected.

11.
Software defects often lead to bugs, runtime errors, and software maintenance difficulties. They should be systematically prevented, found, removed, or fixed all along the software lifecycle. However, detecting and fixing these defects is still, to some extent, a difficult, time-consuming, and manual process. In this paper, we propose a two-step automated approach to detect and then correct various types of maintainability defects in source code. Using genetic programming, our approach automatically generates rules to detect defects, relieving the designer of a tedious manual rule definition task. Then, we correct the detected defects while minimizing the correction effort. A correction solution is defined as the combination of refactoring operations that maximizes the number of corrected defects with minimal code modification effort. We use the Non-dominated Sorting Genetic Algorithm (NSGA-II) to find the best compromise. For six open source projects, we succeeded in detecting the majority of known defects, and the proposed corrections fixed most of them with minimal effort.
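At the heart of NSGA-II is Pareto dominance over the two objectives (defects corrected, maximized; modification effort, minimized). The sketch below extracts the non-dominated front from hypothetical candidate solutions; the full algorithm additionally evolves candidates over generations with crossover, mutation, and crowding-distance selection.

```python
candidates = {  # solution -> (defects_corrected, modification_effort)
    "refactoring_seq_A": (12, 30),
    "refactoring_seq_B": (10, 12),
    "refactoring_seq_C": (12, 45),
    "refactoring_seq_D": (7, 10),
}

def dominates(p, q):
    """p dominates q: no worse on both objectives, strictly better on one."""
    (cp, ep), (cq, eq) = candidates[p], candidates[q]
    return cp >= cq and ep <= eq and (cp > cq or ep < eq)

# The non-dominated front holds the best compromises between the objectives.
front = [p for p in candidates
         if not any(dominates(q, p) for q in candidates if q != p)]
print(front)  # ['refactoring_seq_A', 'refactoring_seq_B', 'refactoring_seq_D']
```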

12.
Manual software testing is a widely practiced verification and validation method that is unlikely to fade away despite the advances in test automation. In the domain of manual testing, many practitioners advocate exploratory testing (ET), i.e., creative, experience-based testing without predesigned test cases, and they claim that it is more efficient than testing with detailed test cases. This paper reports a replicated experiment comparing effectiveness, efficiency, and perceived differences between ET and test-case-based testing (TCT) using 51 students as subjects, who performed manual functional testing on the jEdit text editor. Our results confirm the findings of the original study: 1) there is no difference in the defect detection effectiveness between ET and TCT, 2) ET is more efficient by requiring less design effort, and 3) TCT produces more false-positive defect reports than ET. Based on the small differences in the experimental design, we also put forward a hypothesis that the effectiveness of the TCT approach would suffer more than ET from time pressure. We also found that both approaches had distinctive issues: in TCT, the problems were related to correct abstraction levels of test cases, and the problems in ET were related to test design and logging of the test execution and results. Finally, we recognize that TCT has other benefits over ET in managing and controlling testing in large organizations.

13.
Context

Testing and debugging consume a significant portion of software development effort, yet the two processes are usually conducted independently despite their close relationship. Test adequacy is vital for developers to assure that sufficient testing effort has been made, while finding all the faults in a program as soon as possible is equally important. A tight integration between testing and debugging activities is essential.

Objective

The paper aims to find out whether three factors, namely the adequacy criterion used to gauge a test suite, the size of a prioritized test suite, and the percentage of such a test suite used in fault localization, have significant impacts on integrating test case prioritization techniques with statistical fault localization techniques.

Method

We conduct a controlled experiment to investigate the effectiveness of applying adequate test suites to locate faults in a benchmark suite of seven Siemens programs and four real-life UNIX utility programs, using three adequacy criteria, 16 test case prioritization techniques, and four statistical fault localization techniques. We measure the proportion of code that needs to be examined in order to locate a fault as the effectiveness of a statistical fault localization technique. We also investigate the integration of test case prioritization and statistical fault localization with postmortem analysis.

Results

The main result shows that, on average, it is more effective for a statistical fault localization technique to utilize the execution results of an MC/DC-adequate test suite than those of a branch-adequate test suite, which in turn is more effective than those of a statement-adequate test suite. On the other hand, we find that none of the fault localization techniques studied is sufficiently effective at suggesting fault-relevant statements that fit easily into one debug window of a typical IDE.

Conclusion

We find that the adequacy criterion and the percentage of a prioritized test suite utilized are major factors affecting the effectiveness of statistical fault localization techniques. In our experiment, the adoption of a stronger adequacy criterion leads to a more effective integration of testing and debugging.
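As a concrete stand-in for the statistical fault localization step, the sketch below ranks statements with the well-known Tarantula suspiciousness formula and reports the proportion of code examined before reaching the faulty statement, which is the effectiveness measure described above. The coverage data are fabricated, and the paper studies four such techniques, not necessarily this one.

```python
coverage = {  # statement -> (times covered by failed runs, by passed runs)
    "s1": (3, 5), "s2": (3, 1), "s3": (0, 4), "s4": (2, 2),
}
total_failed, total_passed = 3, 6

def suspiciousness(stmt):
    """Tarantula: failed-coverage ratio normalized against passed coverage."""
    f, p = coverage[stmt]
    fr, pr = f / total_failed, p / total_passed
    return fr / (fr + pr) if (fr + pr) else 0.0

ranking = sorted(coverage, key=suspiciousness, reverse=True)
faulty = "s2"  # assumed known, as in a postmortem evaluation
examined = (ranking.index(faulty) + 1) / len(coverage)
print(ranking, f"code examined: {examined:.0%}")  # s2 ranks first here: 25%
```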

14.
Manual testing was taking 38 man-weeks for every release on every platform. With more hardware platforms coming onto the market and increasing commercial pressure to ship more products sooner, the ability to survive by manual testing alone came into question. Effort was expended on automating integration testing as a trial. The results were good, but not as spectacular as had been hoped for. Nonetheless, automated system testing was investigated whilst retaining the same test coverage achieved by manual testing. Many changes were made and many problems encountered, with the non-technical problems having an unexpectedly large impact: some of them almost destroyed the attempt at automation. Test automation has been successful, although the benefits are only now becoming apparent. More products are being released in less time than had ever been achieved before.

15.
New methods and techniques are needed to reduce the very costly integration and test effort (in terms of lead time, costs, and resources) in the development of high-tech multi-disciplinary systems. To facilitate this effort reduction, we propose a method called model-based integration. This method makes it possible to integrate formal executable models of system components that are not yet physically realized with available realizations of other components. The combination of models and realizations is then used for early analysis of the integrated system by means of validation, verification, and testing. This analysis enables early detection and prevention of problems that would otherwise occur during real integration, resulting in a significant reduction of the effort invested in the real integration and test phases. This paper illustrates how models of components, developed for model-based integration, can be used for automated model-based testing, which allows time-efficient determination of the conformance of component realizations to their requirements. The combination of model-based integration and model-based testing is illustrated in a realistic industrial case study. Results obtained from this study encourage further research on model-based integration as a prominent method for reducing the integration and test effort.

16.
A significant body of prior work has devised approaches for automating the functional testing of interactive applications. However, little work exists for automatically testing their performance. Performance testing imposes additional requirements upon GUI test automation tools: the tools have to be able to replay complex interactive sessions, and they have to avoid perturbing the application’s performance. We study the feasibility of using five Java GUI capture and replay tools for GUI performance test automation. Besides confirming the severity of the previously known GUI element identification problem, we also describe a related problem, the temporal synchronization problem, which is of increasing importance for GUI applications that use timer-driven activity. We find that most of the tools we study have severe limitations when used for recording and replaying realistic sessions of real-world Java applications and that all of them suffer from the temporal synchronization problem. However, we find that the most reliable tool, Pounder, causes only limited perturbation and thus can be used to automate performance testing. Based on an investigation of Pounder’s approach, we further improve its robustness and reduce its perturbation. Finally, we demonstrate in a set of case studies that the conclusions about perceptible performance drawn from manual tests still hold when using automated tests driven by Pounder. Besides the significance of our findings to GUI performance testing, the results are also relevant to capture and replay-based functional GUI test automation approaches.

17.
A variety of modeling approaches, including model-driven development, consider model reuse one of their cornerstones, yet lack support for it. This may be because the available model repositories barely go beyond enhanced versioning and support for collaborative work, and disregard model evolution. We believe that current model evolution approaches do not consider reuse sufficiently and that model repositories for reuse purposes should act as model libraries. This requires new functionality, because models intended for reuse need to achieve and maintain high quality. Moreover, quality assessment and assurance, tasks often considered tedious, need to become as simple as putting away or maintaining artifacts for reuse. In this study, we propose an approach for model evolution in UML model libraries that differs from general model evolution, since it is aimless and triggered by new external requirements. Our approach is founded on graphs that are partitioned into three stages with respect to the level of reusability. Each stage is defined by quality characteristics that are manifestations of a quality model consisting of four essential quality dimensions: syntactic, semantic, pragmatic, and emotional. In order to reach the next level of reusability, i.e., to change the stage of a model, a quality gate needs to be passed. This can be supported by a proactive approach that guides the modeler through the enhancement process and offers additional recommendations based on the level of reusability. Since guidance cannot be fully automated, we implement a review mechanism founded on the idea of the six thinking hats to help maintain focus on the main aspects of a review. Finally, our approach is extended to support the evolution of generations, i.e., groups of several model snapshots, to ease reusability.

18.
Regression testing is performed to provide confidence that changes do not harm the existing behaviour of the software. Test suites tend to grow in size as software evolves, often making it too costly to execute entire test suites. A number of different approaches have been studied to maximize the value of the accrued test suite: minimization, selection, and prioritization. Test suite minimization seeks to eliminate redundant test cases in order to reduce the number of tests to run. Test case selection seeks to identify the test cases that are relevant to some set of recent changes. Test case prioritization seeks to order test cases in such a way that early fault detection is maximized. This paper surveys each of the minimization, selection, and prioritization techniques and discusses open problems and potential directions for future research.
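To illustrate the first of the three areas, here is a minimal sketch of test suite minimization as greedy set cover: repeatedly keep the test case covering the most not-yet-covered requirements and drop the redundant rest. The coverage data are illustrative, and this greedy heuristic is one classic formulation among the many the survey covers.

```python
test_coverage = {  # test case -> covered requirements (or branches)
    "T1": {"r1", "r2", "r3"},
    "T2": {"r2", "r4"},
    "T3": {"r4", "r5"},
    "T4": {"r1", "r5"},
}

def minimize(suite):
    """Greedily pick tests until all coverable requirements are covered."""
    uncovered = set().union(*suite.values())
    kept = []
    while uncovered:
        best = max(suite, key=lambda t: len(suite[t] & uncovered))
        if not suite[best] & uncovered:
            break  # remaining requirements are uncoverable
        kept.append(best)
        uncovered -= suite[best]
    return kept

print(minimize(test_coverage))  # ['T1', 'T3'] covers all five requirements
```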

19.
Facing frequent software version upgrades, ever-shorter testing cycles, and a heavy testing workload, an automated testing framework was built on the QTP platform for an enterprise-grade application. First, based on an understanding of QTP's working principles and the characteristics of enterprise applications, a suitable automated testing framework was designed; automated testing was then realized by designing test cases and by writing and executing test scripts. Practice shows that as the number of automated test executions grows, automated testing takes only about 15% of the time required by manual testing, which makes it particularly well suited to regression testing. Using the framework solved the problem of covering a large number of test cases within a short time, guaranteed the quality of the released software, and improved testing efficiency.

20.
In a continuous integration environment, rapid software updates increase the frequency of regression testing, while the demand for fast defect feedback places even higher requirements on it. Test case prioritization studies the relative importance of test cases, typically executing those with strong fault detection capability first so that defects are revealed earlier, thereby meeting the fast-feedback requirement of continuous integration. Defect prediction uses code features of the system under test and historical defect information to estimate the likelihood of finding defects in a new version. Most traditional clustering-based prioritization methods, however, do not consider the influence of the number of clusters and of the chosen feature subset on the clustering result. This paper applies defect prediction to clustering-based prioritization: it builds an association matrix between test cases and code, clusters the test cases, and uses the defect prediction results together with a maximum-minimum distance strategy to guide inter-cluster and intra-cluster ordering. Experiments show that the number of clusters and the selected feature subset do affect the ordering, and that when the optimal number of clusters and feature subset cannot be obtained, the proposed method improves regression testing efficiency more effectively than a single clustering-based prioritization method.
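A minimal sketch of the maximum-minimum distance idea for ordering: starting from a seed test (here, assumed to cover the code predicted most defect-prone), repeatedly schedule the test whose coverage vector is farthest from everything already scheduled, so that diverse tests run early. The vectors and the seed choice are illustrative, not the paper's data.

```python
test_vectors = {  # test case -> code association vector (e.g., methods covered)
    "T1": (1, 1, 0, 0),
    "T2": (1, 1, 1, 0),
    "T3": (0, 0, 1, 1),
    "T4": (1, 0, 0, 1),
}

def dist(u, v):
    """Euclidean distance between two coverage vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def max_min_order(vectors, seed):
    ordered, remaining = [seed], set(vectors) - {seed}
    while remaining:
        # Pick the test maximizing its minimum distance to scheduled tests.
        nxt = max(remaining,
                  key=lambda t: min(dist(vectors[t], vectors[s]) for s in ordered))
        ordered.append(nxt)
        remaining.remove(nxt)
    return ordered

# Seed chosen as the test covering the code predicted most defect-prone.
print(max_min_order(test_vectors, seed="T2"))  # e.g. ['T2', 'T3', 'T4', 'T1']
```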
