Similar Documents
20 similar documents retrieved.
1.
Web software applications have become complex, sophisticated programs that are based on novel computing technologies. Their most essential characteristic is that they represent a different kind of software deployment: most of the software is never delivered to customers' computers, but remains on servers, allowing customers to run the software across the web. Although powerful, this deployment model brings new challenges to developers and testers. Checking static HTML links is no longer sufficient; web applications must be evaluated as complex software products. This paper focuses on three aspects of web applications that are unique to this type of deployment: (1) an extremely loose form of coupling that features distributed integration, (2) the ability of users to directly change the potential flow of execution, and (3) the dynamic creation of HTML forms. Taken together, these aspects allow the potential control flow to vary with each execution, so the possible control flows cannot be determined statically, which rules out several standard analysis techniques that are fundamental to many software engineering activities. This paper presents a new way to model web applications, based on software couplings that are new to web applications, dynamic flow of control, distributed integration, and partial dynamic web application development. This model is based on the notion of atomic sections, which allow analysis tools to build the analog of a control flow graph for web applications. The atomic section model has numerous uses; this paper applies it to the problem of testing web applications.
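The abstract does not spell out the formal atomic-section model, so the following is only a minimal illustrative sketch of the underlying idea: treat the HTML fragments a server component can emit as nodes, treat possible transitions (including user-controlled ones such as the back button or direct URL edits) as edges, and enumerate bounded control flows from that graph. All page names and transition kinds below are invented.

```python
from collections import defaultdict

# Hypothetical web application: nodes are atomic sections (HTML fragments a
# server component can emit), edges are possible transitions. The "back_button"
# and "url_edit" edges model user-controlled flow that a static link check misses.
transitions = defaultdict(list)
edges = [
    ("login_form", "welcome", "form_post"),
    ("login_form", "login_form", "form_post_invalid"),
    ("welcome", "search_form", "link"),
    ("search_form", "results", "form_post"),
    ("results", "item_detail", "link"),
    ("item_detail", "search_form", "back_button"),  # user presses the back button
    ("welcome", "item_detail", "url_edit"),          # user edits the URL directly
]
for src, dst, kind in edges:
    transitions[src].append((dst, kind))

def control_flows(start, max_len=5):
    """Enumerate possible control flows (bounded paths) from a start section."""
    stack = [[start]]
    while stack:
        path = stack.pop()
        yield path
        if len(path) < max_len:
            for dst, _ in transitions[path[-1]]:
                stack.append(path + [dst])

if __name__ == "__main__":
    for flow in control_flows("login_form", max_len=4):
        print(" -> ".join(flow))
```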

2.
Context: The web has had a significant impact on all aspects of our society. As our society relies more and more on the web, the dependability of web applications has become increasingly important. To make these applications more dependable, researchers have proposed various techniques for testing web-based software applications over the past decade. Our literature search for related studies retrieved 193 papers in the area of web application testing, published between 2000 and 2013. Objective: As this research area matures and the number of related papers increases, it is important to systematically identify, analyze, and classify the publications and provide an overview of the trends and empirical evidence in this specialized field. Methods: We systematically review the body of knowledge related to functional testing of web applications through a systematic literature review (SLR). This SLR is a follow-up and complementary study to a recent systematic mapping (SM) study that we conducted in this area. As part of this study, we pose three sets of research questions, define selection and exclusion criteria, and synthesize the empirical evidence in this area. Results: Our pool of studies includes 95 papers (from the 193 retrieved) published in the area of web application testing between 2000 and 2013. The data extracted during our SLR study is available through a publicly-accessible online repository. Among our results are the following: (1) the list of test tools in this area and their capabilities, (2) the types of test models and fault models proposed in this domain, (3) the way the empirical studies in this area have been designed and reported, and (4) the state of empirical evidence and industrial relevance. Conclusion: We discuss the emerging trends in web application testing and the implications for researchers and practitioners in this area. The results of our SLR can help researchers obtain an overview of existing web application testing approaches, fault models, tools, metrics, and empirical evidence, and subsequently identify areas in the field that require more attention from the research community.

3.
Context: The Web has had a significant impact on all aspects of our society. As our society relies more and more on the Web, the dependability of web applications has become increasingly important. To make these applications more dependable, researchers have proposed various techniques for testing web-based software applications over the past decade. Our literature search for related studies retrieved 147 papers in the area of web application testing, published between 2000 and 2011. Objective: As this research area matures and the number of related papers increases, it is important to systematically identify, analyze, and classify the publications and provide an overview of the trends in this specialized field. Method: We review and structure the body of knowledge related to web application testing through a systematic mapping (SM) study. As part of this study, we pose two sets of research questions, define selection and exclusion criteria, and systematically develop and refine a classification schema. In addition, we conduct a bibliometrics analysis of the papers included in our study. Results: Our study includes a set of 79 papers (from the 147 retrieved) published in the area of web application testing between 2000 and 2011. We present the results of our systematic mapping study. Our mapping data is available through a publicly-accessible repository. We derive the observed trends, for instance, in terms of types of papers, sources of information used to derive test cases, and types of evaluations used in papers. We also report the demographic and bibliometric trends in this domain, including top-cited papers, active countries and researchers, and top venues in this research area. Conclusion: We discuss the emerging trends in web application testing and the implications for researchers and practitioners in this area. The results of our systematic mapping can help researchers obtain an overview of existing web application testing approaches and identify areas in the field that require more attention from the research community.

4.
Context: In the era of globally-distributed software engineering, the practice of global software testing (GST) has witnessed increasing adoption. Although there have been ethnographic studies of the development aspects of global software engineering, there have been fewer studies of GST, which, to succeed, can require dealing with unique challenges. Objective: To address this limitation of existing studies, we conducted a study of a vendor organization involved in one kind of GST practice, outsourced, offshored software testing, and report its findings in this paper. Method: We conducted an ethnographically-informed study of three vendor-side testing teams over a period of two months. We used methods such as interviews and participant observations to collect the data, and the thematic-analysis approach to analyze it. Findings: Our findings describe how the participant test engineers perceive software testing and deadline pressures, the challenges that they encounter, and the strategies that they use for coping with those challenges. The findings reveal several interesting insights. First, motivation and appreciation play an important role for our participants in ensuring that high-quality testing is performed. Second, intermediate onshore teams increase the degree of pressure experienced by the participant test engineers. Third, vendor team participants perceive productivity differently from their client teams, which results in unproductive-productivity experiences. Lastly, participants encounter quality-dilemma situations for various reasons. Conclusion: The study findings suggest the need for (1) appreciating test engineers' efforts, (2) investigating the team structure's influence on pressure and the GST practice, (3) understanding culture's influence on other aspects of GST, and (4) identifying and addressing quality-dilemma situations.

5.
Comprehensive, automated software testing requires an oracle to check whether the output produced by a test case matches the expected behaviour of the program. But the challenges in creating suitable oracles limit the ability to perform automated testing in some programs, and especially in scientific software. Metamorphic testing is a method for automating the testing process for programs without test oracles. This technique operates by checking whether the program behaves according to properties called metamorphic relations. A metamorphic relation describes the change in output when the input is changed in a prescribed way. Unfortunately, finding the metamorphic relations satisfied by a program or function remains a labour-intensive task, which is generally performed by a domain expert or a programmer. In this work, we propose a machine learning approach for predicting metamorphic relations that uses a graph-based representation of a program to represent control flow and data dependency information. In earlier work, we found that simple features derived from such graphs provide good performance. An analysis of the features used in this earlier work led us to explore the effectiveness of several representations of those graphs using the machine learning framework of graph kernels, which provide various ways of measuring similarity between graphs. Our results show that a graph kernel that evaluates the contribution of all paths in the graph has the best accuracy, and that control flow information is more useful than data dependency information. The data used in this study are available for download at http://www.cs.colostate.edu/saxs/MRpred/functions.tar.gz to help researchers in further development of metamorphic relation prediction methods. Copyright © 2015 John Wiley & Sons, Ltd.
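As a generic illustration of what checking a metamorphic relation looks like in code (this is not the paper's prediction approach, just the underlying testing idea), consider a sine implementation with no output oracle: we cannot easily state what sin(1.2345) should be, but we know that sin(x) must equal sin(π − x).

```python
import math
import random

def metamorphic_test_sine(trials=1000, tol=1e-9):
    """Check the metamorphic relation sin(x) == sin(pi - x) on random inputs.

    No oracle for the exact value of sin(x) is needed; only the relation
    between the outputs of the source test case and the follow-up test case.
    """
    for _ in range(trials):
        x = random.uniform(-10.0, 10.0)
        source_output = math.sin(x)
        followup_output = math.sin(math.pi - x)
        if abs(source_output - followup_output) > tol:
            return False, x  # relation violated: report the counterexample
    return True, None

if __name__ == "__main__":
    ok, counterexample = metamorphic_test_sine()
    print("relation holds" if ok else f"violated at x={counterexample}")
```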

6.
Context: Evidence-based software engineering (EBSE) provides a process for solving practical problems based on a rigorous research approach. The primary focus so far has been on mapping and aggregating evidence through systematic reviews. Objectives: We extend existing work on evidence-based software engineering by using the EBSE process in an industrial case to help an organization improve its automotive testing process. With this we contribute by (1) providing experiences of using an evidence-based process to analyze a real-world automotive test process and (2) providing evidence of challenges and related solutions for automotive software testing processes. Methods: In this study we perform an in-depth investigation of an automotive test process using an extended EBSE process, including case study research (to gain an understanding of practical questions and define a research scope), a systematic literature review (to identify solutions in the literature), and value stream mapping (to map out an improved automotive test process based on the current situation and the improvement suggestions identified). These steps are followed by reflections on the EBSE process used. Results: In the first step of the EBSE process we identified 10 challenge areas with a total of 26 individual challenges. For 15 of those 26 challenges, our domain-specific systematic literature review identified solutions. Based on the input from the challenges and the solutions, we created a value stream map of the current and future process. Conclusions: Overall, we found that the evidence-based process as presented in this study helps in transferring research results to industry, but at the same time some challenges lie ahead (e.g., scoping systematic reviews to focus more on concrete industry problems, and understanding strategies for conducting EBSE with respect to effort and the quality of the evidence).

7.
This article presents a metaheuristic algorithm for testing software, especially web applications, which can be modeled as a state transition diagram. We formulate the testing problem as an optimization problem and use a simulated annealing (SA) metaheuristic to generate test cases as sequences of events while keeping the test suite size reasonable. SA evolves a solution by minimizing an energy function based on testing objectives such as coverage, diversity, and continuity of events. The suggested method includes a “significance weight” assigned to events that lead to important web pages, which ensures that test cases cover relevant features. The experimental results demonstrate the effectiveness of simulated annealing and show that SA yields good results for testing web applications in comparison with other heuristics.
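The abstract does not give the energy function, so the sketch below only illustrates the overall scheme under assumptions: a test case is an event sequence drawn from a small state-transition model, and the energy penalizes uncovered transitions (coverage) and repeated events (diversity). The model, weights, and neighbour move are all invented for illustration.

```python
import math
import random

# Assumed state-transition model of a web application: event -> possible next events.
MODEL = {
    "open": ["search", "login"],
    "login": ["search", "logout"],
    "search": ["view_item", "search"],
    "view_item": ["add_to_cart", "search"],
    "add_to_cart": ["checkout", "search"],
    "checkout": ["logout"],
    "logout": [],
}
ALL_TRANSITIONS = {(a, b) for a, succ in MODEL.items() for b in succ}

def random_sequence(length=8):
    """Build a random event sequence that respects the transition model."""
    seq, ev = ["open"], "open"
    while len(seq) < length and MODEL[ev]:
        ev = random.choice(MODEL[ev])
        seq.append(ev)
    return seq

def energy(suite):
    """Assumed energy: heavily penalize uncovered transitions, lightly penalize repeats."""
    covered = {(s[i], s[i + 1]) for s in suite for i in range(len(s) - 1)}
    uncovered = len(ALL_TRANSITIONS - covered)          # coverage term
    repeats = sum(len(s) - len(set(s)) for s in suite)  # diversity term
    return 10 * uncovered + repeats

def anneal(suite_size=3, steps=2000, t0=5.0, cooling=0.999):
    """Simulated annealing over test suites: replace one sequence per move."""
    suite = [random_sequence() for _ in range(suite_size)]
    temp = t0
    for _ in range(steps):
        candidate = list(suite)
        candidate[random.randrange(suite_size)] = random_sequence()  # neighbour move
        delta = energy(candidate) - energy(suite)
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            suite = candidate
        temp *= cooling
    return suite

if __name__ == "__main__":
    best = anneal()
    print("energy:", energy(best))
    for seq in best:
        print(" -> ".join(seq))
```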

8.
Context: Test suite reduction is the problem of creating and executing a set of test cases that are smaller in size but equivalent in effectiveness to an original test suite. However, reduced suites can still be large and executing all the tests in a reduced test suite can be time consuming. Objective: We propose ordering the tests in a reduced suite to increase its rate of fault detection. The ordered reduced test suite can be executed in time constrained situations, where, even if test execution is stopped early, the best test cases from the reduced suite will already be executed. Method: In this paper, we present several approaches to order reduced test suites using experimentally verified prioritization criteria for the domain of web applications. We conduct an empirical study with three subject applications and user-session-based test cases to demonstrate how ordered reduced test suites often make a practical contribution. To enable comparison between test suites of different sizes, we develop Mod_APFD_C, a modification of the traditional prioritization effectiveness measure. Results: We find that by ordering the reduced suites, we create test suites that are more effective than unordered reduced suites. In each of our subject applications, there is at least one ordered reduced suite that outperforms the best unordered reduced suite and the best prioritized original suite. Conclusions: Our results show that when a tester does not have enough time to execute the entire reduced suite, executing an ordered reduced suite often improves the rate of fault detection. By coupling the underlying system's characteristics with observations from our study on the criteria that produce the best ordered reduced suites, a tester can order their reduced test suites to obtain increased testing effectiveness.
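Mod_APFD_C itself is not defined in the abstract, but the traditional APFD measure it modifies is standard; a minimal sketch of APFD for a given test order (with a made-up fault matrix) shows the kind of rate-of-fault-detection comparison involved.

```python
def apfd(order, detects):
    """Traditional APFD (average percentage of faults detected).

    order   -- list of test-case ids in execution order
    detects -- dict: test id -> set of faults that the test detects
    Assumes every fault is detected by at least one test in the order.
    """
    faults = set().union(*detects.values())
    n, m = len(order), len(faults)
    first_positions = []
    for fault in faults:
        # 1-based position of the first test in the order that detects the fault
        pos = next(i for i, t in enumerate(order, start=1) if fault in detects[t])
        first_positions.append(pos)
    return 1 - sum(first_positions) / (n * m) + 1 / (2 * n)

# Hypothetical fault matrix for a reduced suite of four user-session tests.
detects = {
    "t1": {"f1"},
    "t2": {"f1", "f2", "f3"},
    "t3": {"f2"},
    "t4": {"f4"},
}
print(apfd(["t2", "t4", "t1", "t3"], detects))  # a well-ordered reduced suite
print(apfd(["t1", "t3", "t2", "t4"], detects))  # a weaker ordering of the same suite
```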

9.
A service-oriented architecture (SOA) for web applications is often implemented using web services (WSs) and consists of different operations whose executions are perceived as events. The order and timeliness of occurrences of these events play a vital role in the proper functioning of a real-time SOA. This paper presents an event-based approach to modeling and testing the functional behavior of WSs by means of event sequence graphs (ESGs). Nodes of an ESG represent events, e.g., “request” or “response”, and arcs give the sequence of these events. For representing parameter values, e.g., time-outs of operation calls, ESGs are augmented by decision tables. A case study carried out on a commercial web system with an SOA validates the approach and analyzes its characteristic issues. The novelty of the approach stems from (i) its simplicity and lucidity in representing complex real-time web applications based on WSs in an SOA, and (ii) its modeling approach, which also takes testing into account and thus enables comfortable fault management, leading to a holistic view.
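A minimal sketch of the ESG idea under assumed event names (not the commercial system from the case study): events are nodes, arcs give the allowed sequencing, and complete event sequences that cover every arc serve as test cases. The decision-table augmentation for parameter values is omitted here.

```python
# Assumed event sequence graph for a web-service operation: node = event,
# arcs = allowed "next" events. "[" and "]" mark the entry and exit pseudo-events.
ESG = {
    "[": ["request"],
    "request": ["response", "timeout"],
    "response": ["ack", "request"],
    "timeout": ["retry"],
    "retry": ["response"],
    "ack": ["]"],
}

def arc_covering_sequences(esg, entry="[", exit_="]"):
    """Greedily build complete event sequences until every arc is covered.

    The naive fallback (take the first listed arc) assumes that fallback arcs
    eventually lead to the exit, which holds for this small example graph.
    """
    uncovered = {(a, b) for a, succ in esg.items() for b in succ}
    sequences = []
    while uncovered:
        node, seq = entry, [entry]
        while node != exit_:
            nexts = esg[node]
            # prefer an arc that is still uncovered, otherwise take any arc
            nxt = next((b for b in nexts if (node, b) in uncovered), nexts[0])
            uncovered.discard((node, nxt))
            seq.append(nxt)
            node = nxt
        sequences.append(seq)
    return sequences

for s in arc_covering_sequences(ESG):
    print(" ".join(s))
```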

10.
Context: GUI testing is system testing of software that has a graphical user interface (GUI) front-end. Because system testing entails that the entire software system, including the user interface, be tested as a whole, during GUI testing test cases, modeled as sequences of user input events, are developed and executed on the software by exercising the GUI's widgets (e.g., text boxes and clickable buttons). More than 230 articles have appeared in the area of GUI testing since 1991. Objective: In this paper, we study this existing body of knowledge using a systematic mapping (SM). Method: The SM is conducted using the guidelines proposed by Petersen et al. We pose three sets of research questions and define selection and exclusion criteria. From the initial pool of 230 articles, published between 1991 and 2011, our final pool consisted of 136 articles. We systematically develop a classification scheme and map the selected articles to this scheme. Results: We present two types of results. First, we report the demographic and bibliometric trends in this domain, including top-cited articles, active researchers, top venues, and active countries in this research area. Moreover, we derive the trends, for instance, in terms of types of articles, sources of information used to derive test cases, and types of evaluations used in articles. Our second major result is a publicly-accessible repository that contains all our mapping data. We plan to update this repository on a regular basis, making it a “live” resource for all researchers. Conclusion: Our SM provides an overview of existing GUI testing approaches and helps spot areas in the field that require more attention from the research community. For example, much work is needed to connect academic model-based techniques with commercially available tools. To this end, studies are needed to compare the state of the art in GUI testing between academic techniques and industrial tools.

11.
Context: Software testing is a key aspect of software reliability and quality assurance, in a context where software development constantly has to overcome mammoth challenges in a continuously changing environment. One of the characteristics of software testing is that it has a large intellectual capital component and can thus benefit from the experience gained in past projects. Software testing can therefore potentially benefit from solutions provided by the knowledge management discipline. There are in fact a number of proposals concerning effective knowledge management for several software engineering processes. Objective: We defend the use of a lessons learned system for software testing. The reason is that such a system is an effective knowledge management resource that enables testers and managers to take advantage of the experience locked away in the brains of the testers. To do this, the experience has to be gathered, disseminated and reused. Method: After analyzing the existing proposals for managing software testing experience, we detected significant weaknesses in current systems of this type. The architectural model proposed here for lessons learned systems is designed to avoid these weaknesses. This model (i) defines the structure of the software testing lessons learned; (ii) sets up procedures for lessons learned management; and (iii) supports the design of software tools to manage the lessons learned. Results: The result is a different approach, based on the management of the lessons learned that software testing engineers gather from everyday experience, with two basic goals: usefulness and applicability. Conclusion: The architectural model proposed here lays the groundwork for overcoming the obstacles to sharing and reusing experience gained in software testing and test management. As such, it provides guidance for developing software testing lessons learned systems.

12.
张博刚  张威  陈月宁  廖飞雄 《计算机应用》2010,30(10):2749-2753
To improve the coverage of automated GUI testing, the speed and precision of fault localization, and the detection of faults caused by spatial errors arising from spatio-temporal changes, an automated GUI testing model based on runtime monitoring is established. The model divides the GUI into four layers: a window frame layer, an interface element layer, a functional structure layer, and an execution record layer. The window frame layer describes all windows of the GUI; the interface element layer describes user input; the functional structure layer proposes functional coverage criteria; and the execution record layer, through instrumented logging code, dynamically monitors the path of each software execution and the runtime state of every window, thereby improving test coverage and providing a basis for reliability computation from the total and successful execution counts of each window in the execution records. Because the runtime behaviour of the code is monitored, faults can be located at the code level, improving the precision and speed of fault localization. Finally, the effectiveness of the model is validated using the Notepad program as an example.

13.
How to develop knowledge-based and expert systems is now becoming well understood; how to test these systems still poses some challenges. There has been considerable progress in developing techniques for static testing of these systems, which check for problems via formal examination methods; but there has been almost no work on dynamic testing, that is, testing the systems under operating conditions. A novel approach to the dynamic testing of expert system rule bases is presented. This approach, Heuristic Testing, is based on the idea of first testing systems for disastrous safety and integrity problems before testing for primary functions and other classes of problems, and a prioritized series of 10 classes of faults is identified. The Heuristic Testing approach is intended to assure software reliability rather than simply find defects; this reliability, based on the 10 fault classes, is called component reliability. General procedures for conceptualizing and generating test cases were developed for all fault classes, including a Generic Testing Method for generating key test-case values. One of the classes, error-metric, illustrates how complexity metrics, now used for predicting conventional software problems, could be developed for expert system rule bases. Two key themes are automation (automatically generating test cases) and fix-as-you-go testing (fixing a problem before continuing to test). The overall approach may be generalizable to static rule base testing, to testing of other expert system components, to testing of other nonconventional systems such as neural network and object-oriented systems, and even to conventional software.

14.
Context: Software has become an innovative solution for many applications and methods in science and engineering. Ensuring the quality and correctness of software is challenging because each program has different configurations and input domains. To ensure software quality, all possible configurations and input combinations need to be evaluated against their expected outputs. However, such exhaustive testing is impractical because of time and resource constraints due to the large domains of inputs and configurations. Thus, different sampling techniques have been used to sample these input domains and configurations. Objective: Combinatorial testing can be used to effectively detect faults in the software under test. This technique uses combinatorial optimization concepts to systematically minimize the number of test cases by considering the combinations of inputs. This paper proposes a new strategy to generate combinatorial test suites using Cuckoo Search concepts. Method: Cuckoo Search is used in the design and implementation of a strategy to construct optimized combinatorial sets. The strategy consists of different construction algorithms, which are combined to serve the Cuckoo Search. Results: The efficiency and performance of the new technique were demonstrated through different sets of experiments. The effectiveness of the strategy is assessed by applying the generated test suites to a real-world case study for the purpose of functional testing. Conclusion: The results show that the generated test suites can detect faults effectively. In addition, the strategy opens a new direction for the application of Cuckoo Search in the context of software engineering.
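The strategy's construction algorithms are not described in the abstract; the sketch below is only a much-simplified cuckoo-search-style loop for pairwise (2-way) test generation, in which candidate test cases are scored by how many uncovered parameter-value pairs they cover and the worst "nests" are abandoned each generation. The parameters, values, and all tuning constants are invented.

```python
import itertools
import random

# Hypothetical configuration model (parameter -> possible values).
PARAMS = {
    "browser": ["firefox", "chrome", "edge"],
    "os": ["linux", "windows"],
    "auth": ["guest", "user", "admin"],
    "https": [True, False],
}
NAMES = sorted(PARAMS)

def all_pairs():
    """Every 2-way interaction (pair of parameter-value assignments)."""
    return {((p1, v1), (p2, v2))
            for p1, p2 in itertools.combinations(NAMES, 2)
            for v1 in PARAMS[p1] for v2 in PARAMS[p2]}

def pairs_of(test):
    return {((p1, test[p1]), (p2, test[p2]))
            for p1, p2 in itertools.combinations(NAMES, 2)}

def random_test():
    return {p: random.choice(vs) for p, vs in PARAMS.items()}

def mutate(test):
    t = dict(test)
    p = random.choice(NAMES)
    t[p] = random.choice(PARAMS[p])
    return t

def cuckoo_pairwise(nests=10, generations=30, abandon=0.3):
    """Grow a pairwise-covering suite; each added test case is chosen by a
    simplified cuckoo-search loop (random walks plus abandoning the worst nests)."""
    uncovered, suite = all_pairs(), []
    while uncovered:
        def fitness(t):
            return len(pairs_of(t) & uncovered)
        population = [random_test() for _ in range(nests)]
        for _ in range(generations):
            i = random.randrange(nests)           # a cuckoo lays an egg in a random nest
            new = mutate(population[i])
            if fitness(new) >= fitness(population[i]):
                population[i] = new
            population.sort(key=fitness, reverse=True)
            for j in range(int(nests * (1 - abandon)), nests):
                population[j] = random_test()     # abandon the worst nests
        best = max(population, key=fitness)
        if fitness(best) == 0:                    # safety net: cover one remaining pair directly
            (p1, v1), (p2, v2) = next(iter(uncovered))
            best = random_test()
            best[p1], best[p2] = v1, v2
        suite.append(best)
        uncovered -= pairs_of(best)
    return suite

if __name__ == "__main__":
    suite = cuckoo_pairwise()
    print(f"{len(suite)} test cases cover all 2-way interactions")
    for case in suite:
        print(case)
```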

15.
Context: Software testing is a knowledge-intensive process, and thus Knowledge Management (KM) principles and techniques should be applied to manage software testing knowledge. Objective: This study surveys existing research on KM initiatives in software testing in order to identify the state of the art in the area as well as directions for future research. Aspects such as purposes, types of knowledge, technologies and research type are investigated. Method: The mapping study was performed by searching seven electronic databases. We considered studies published until December 2013. The initial resulting set comprised 562 studies. From this set, a total of 13 studies were selected. For these 13, we performed snowballing and a direct search of the publications of the researchers and research groups that conducted these studies. Results: From the mapping study, we identified 15 studies addressing KM initiatives in software testing, which were reviewed in order to extract relevant information on a set of research questions. Conclusions: Although only a few studies were found that address KM initiatives in software testing, the mapping shows an increasing interest in the topic in recent years. Reuse of test cases is the perspective that has received the most attention. From the KM point of view, most of the studies discuss aspects related to providing automated support for managing testing knowledge by means of a KM system. Moreover, as a main conclusion, the results show that KM is pointed out as an important strategy for increasing test effectiveness, as well as for improving the selection and application of suitable techniques, methods and test cases. On the other hand, the inadequacy of existing KM systems appears as the most cited problem related to applying KM in software testing.

16.

Web applications are deployed on machines around the globe and offer almost universal accessibility. These applications assure functional interconnectivity between different components on a 24/7 basis. One of the most important requirements is data confidentiality and secure authentication. However, implementation flaws and unfulfilled requirements often result in security leaks that malicious users eventually exploit. In this context, the application of different testing methods is of utmost importance in order to detect software defects during development and to prevent unauthorized access in advance. In this paper, we contribute to test automation for web applications. In particular, we focus on using planning for testing, introducing underlying models that cover attacks and their use in testing web applications. The planning model offers a high degree of extendibility and configurability and also overcomes the limits of traditional graphical representations. New testing possibilities emerge that eventually lead to better vulnerability detection, thereby ensuring more secure web services and applications.


17.
18.
Context: Quality assurance effort, especially testing effort, is frequently a major cost factor during software development. Consequently, one major goal is often to reduce testing effort. One promising way to improve the effectiveness and efficiency of software quality assurance is the use of data from early defect detection activities to provide a software testing focus. Studies indicate that using a combination of early defect data and other product data to focus testing activities outperforms the use of other product data only. One of the key challenges is that the use of data from early defect detection activities (such as inspections) to focus testing requires a thorough understanding of the relationships between these early defect detection activities and testing. An aggravating factor is that these relationships are highly context-specific and need to be evaluated for concrete environments. Objective: The underlying goal of this paper is to help companies get a better understanding of these relationships for their own environment, and to provide them with a methodology for finding relationships in their own environments. Method: This article compares three different strategies for evaluating assumed relationships between inspections and testing. We compare a confidence counter, different quality classes, and the F-measure including precision and recall. Results: One result of this case-study-based comparison is that evaluations based on the aggregated F-measures are more suitable for industry environments than evaluations based on a confidence counter. Moreover, they provide more detailed insights about the validity of the relationships. Conclusion: We have confirmed that inspection results are suitable data for controlling testing activities. Evaluated knowledge about relationships between inspections and testing can be used in the integrated inspection and testing approach In2Test to focus testing activities. Product data can be used in addition. However, the assumptions have to be evaluated in each new context.
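The abstract names the three evaluation strategies without giving their formulas; as a reminder of the third one, here is a minimal sketch of precision, recall, and the F-measure applied to an assumed relationship ("code units that were defect-prone in inspections are also defect-prone in testing"). The module data is invented.

```python
def precision_recall_f(predicted, actual):
    """Precision, recall and F-measure of a set of predicted defect-prone
    units against the units that actually turned out defect-prone in testing."""
    tp = len(predicted & actual)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(actual) if actual else 0.0
    f = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f

# Hypothetical evaluation of the assumption "inspection-defect-prone modules
# are also test-defect-prone".
inspection_defect_prone = {"mod_a", "mod_b", "mod_c", "mod_d"}
test_defect_prone = {"mod_b", "mod_c", "mod_e"}

p, r, f = precision_recall_f(inspection_defect_prone, test_defect_prone)
print(f"precision={p:.2f} recall={r:.2f} F-measure={f:.2f}")
```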

19.
20.
Context: Complexity measures provide us with some information about software artifacts. A measure of the difficulty of testing a piece of code could be very useful for controlling the test phase. Objective: The aim of this paper is to define a new measure of the difficulty for a computer to generate test cases, which we call Branch Coverage Expectation (BCE). We also analyze the most common complexity measures and the most important features of a program. With this analysis we try to discover whether there exists a relationship between them and the code coverage of an automatically generated test suite. Method: The definition of this measure is based on a Markov model of the program. This model is used not only to compute the BCE, but also to provide an estimation of the number of test cases needed to reach a given coverage level in the program. In order to check our proposal, we perform a theoretical validation and carry out an empirical validation study using 2600 test programs. Results: The results show that the previously existing measures are not very useful for estimating the difficulty of testing a program, because they are not highly correlated with code coverage. Our proposed measure is much more strongly correlated with code coverage than the existing complexity measures. Conclusion: The high correlation of our measure with code coverage suggests that BCE is a very promising way of measuring the difficulty of automatically testing a program. Our proposed measure is useful for predicting the behavior of an automatic test case generator.
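The exact definition of BCE is not reproduced in the abstract, so the sketch below only illustrates the general idea under assumptions: the program is abstracted as a Markov chain over branch outcomes, and Monte Carlo simulation over that chain estimates the expected branch coverage of n random test cases and the number of test cases needed to reach a target coverage level. All transition probabilities are invented.

```python
import random

# Assumed Markov model of a small program: node -> [(successor, probability)].
# "entry" and "exit" are pseudo-nodes; the other nodes are branch outcomes.
CHAIN = {
    "entry": [("b1_true", 0.7), ("b1_false", 0.3)],
    "b1_true": [("b2_true", 0.5), ("b2_false", 0.5)],
    "b1_false": [("exit", 1.0)],
    "b2_true": [("exit", 1.0)],
    "b2_false": [("b1_true", 0.2), ("exit", 0.8)],
}
BRANCHES = {n for n in CHAIN if n != "entry"}

def run_once():
    """Simulate one test case: the set of branch outcomes exercised in one walk."""
    node, visited = "entry", set()
    while node != "exit":
        succs, probs = zip(*CHAIN[node])
        node = random.choices(succs, weights=probs)[0]
        if node != "exit":
            visited.add(node)
    return visited

def expected_coverage(num_tests, trials=2000):
    """Monte Carlo estimate of branch coverage reached by num_tests random tests."""
    total = 0.0
    for _ in range(trials):
        covered = set()
        for _ in range(num_tests):
            covered |= run_once()
        total += len(covered) / len(BRANCHES)
    return total / trials

def tests_needed(target=0.95):
    """Smallest number of random tests whose estimated coverage reaches the target."""
    n = 1
    while expected_coverage(n) < target:
        n += 1
    return n

if __name__ == "__main__":
    print("expected coverage of one test:", round(expected_coverage(1), 3))
    print("tests needed for 95% branch coverage:", tests_needed())
```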
