Similar Documents (20 results)
1.

Context

In order to ensure high quality of a process model repository, refactoring operations can be applied to correct anti-patterns, such as overlap of process models, inconsistent labeling of activities and overly complex models. However, if a process model collection is created and maintained by different people over a longer period of time, manual detection of such refactoring opportunities becomes difficult, simply due to the number of processes in the repository. Consequently, there is a need for techniques to detect refactoring opportunities automatically.

Objective

This paper proposes a technique for automatically detecting refactoring opportunities.

Method

We developed the technique based on metrics that measure the consistency of activity labels, the extent to which processes overlap, and the type of overlap between them. We evaluated the technique by applying it to two large process model repositories.
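As a concrete illustration of the label-consistency side of such metrics, the sketch below flags near-duplicate activity labels in a repository using plain string similarity. This is a minimal sketch under assumed thresholds, not the paper's actual metrics; the function names are hypothetical.

```python
# Hypothetical sketch of a label-consistency check (not the paper's exact
# metrics): activity labels are compared pairwise, and near-duplicates with
# inconsistent wording are flagged as refactoring opportunities.
from difflib import SequenceMatcher
from itertools import combinations

def label_similarity(a: str, b: str) -> float:
    """Normalized string similarity between two activity labels."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def inconsistent_label_pairs(labels, lo=0.7, hi=1.0):
    """Label pairs that are similar but not identical - candidates for an
    'inconsistent labeling' refactoring."""
    return [(a, b) for a, b in combinations(sorted(set(labels)), 2)
            if lo <= label_similarity(a, b) < hi]

labels = ["Check invoice", "Check the invoice", "Approve order"]
print(inconsistent_label_pairs(labels))
# e.g. [('Check invoice', 'Check the invoice')]
```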

Results

The evaluation shows that the technique can be used to pinpoint the approximate location of three types of refactoring opportunities with high precision and recall and of one type of refactoring opportunity with high recall, but low precision.

Conclusion

We conclude that the technique presented in this paper can be used in practice to automatically detect a number of anti-patterns that can be corrected by refactoring.

2.

Context

Boolean expressions are a central aspect of specifications and programs, but they also offer dangerously many ways to introduce faults. To counter this effect, various criteria to generate and evaluate tests have been proposed. These are traditionally based on the structure of the expressions, but are not directly related to the possible faults. Often, they also require expressions to be in particular formats such as disjunctive normal form (DNF), where a strict hierarchy of faults is available to prove fault detection capability.

Objective

This paper describes a method that generates test cases directly from an expression’s possible faults, guaranteeing that faults of any chosen class will be detected. In contrast to many previous criteria, this approach does not require the Boolean expressions to be in DNF, but allows expressions in any format, using any deliberate fault classes.

Method

The presented approach is based on creating test objectives for individual faults, such that efficient, modern satisfiability solvers can be used to derive test cases that directly address the faults. Although the number of such test objectives can be high depending on the considered fault classes, a number of optimizations can be applied to reduce the test generation effort.
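To make the core idea concrete: a test case detects a fault exactly when the original expression and its faulty mutant disagree, so each test objective asks for an assignment on which they differ. The hedged sketch below enumerates assignments in place of the efficient SAT solver the paper relies on; the expressions and the helper name are illustrative only.

```python
# Illustrative sketch of fault-based test generation for Boolean expressions.
# Assumed simplification: exhaustive enumeration stands in for the efficient
# SAT solver used in the paper. A test detects a fault iff the original
# expression and the faulty mutant evaluate differently on it.
from itertools import product

def find_detecting_test(original, mutant, names):
    """Return an assignment on which 'original' and 'mutant' differ, or None
    (None means the mutant is equivalent, so the fault is undetectable)."""
    for values in product([False, True], repeat=len(names)):
        env = dict(zip(names, values))
        if original(env) != mutant(env):   # the test objective is satisfied
            return env
    return None

# Original: a and (b or c); operator-fault mutant: a or (b or c).
orig = lambda e: e["a"] and (e["b"] or e["c"])
mut = lambda e: e["a"] or (e["b"] or e["c"])
print(find_detecting_test(orig, mut, ["a", "b", "c"]))
# {'a': False, 'b': False, 'c': True}
```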

Results

Evaluation on a set of commonly used benchmarks shows that, despite guaranteeing fault coverage, the approach produces even smaller test suites than other state-of-the-art strategies. At the same time, the fault detection capability is not affected negatively, and clearly improves over state-of-the-art criteria for general-form Boolean expressions.

Conclusion

The approach presented in this paper is shown to improve over the state of the art with respect to the types of expressions that can be handled, the fault classes that are guaranteed to be covered, and the sizes of the test suites generated automatically. This has implications for several fields of software testing: the main application is specification-based testing, but Boolean expressions also exist in normal source code and need to be tested there as well.

3.
Fault-based test suite prioritization for specification-based testing

Context

Existing test suite prioritization techniques usually rely on code coverage information or historical execution data that serve as indicators for estimating the fault-detecting ability of test cases. Such indicators are primarily empirical in nature and not theoretically driven; hence, they do not necessarily provide sound estimates. Also, these techniques are not applicable when the source code is not available or when the software is tested for the first time.

Objective

We propose and develop the novel notion of fault-based prioritization of test cases, which directly utilizes theoretical knowledge of their fault-detecting ability and of the relationships between the test cases and the faults in the prescribed fault model from which the test cases are generated.

Method

We demonstrate our approach of fault-based prioritization by applying it to the testing of the implementation of logical expressions against their specifications. We then validate our proposal by an empirical study that evaluates the effectiveness of prioritization techniques using two different metrics.
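One simple way to operationalize such prioritization, assuming the fault model tells us before any execution which faults each generated test can reveal, is a greedy ordering that always schedules the test covering the most not-yet-revealed faults. This is an assumed sketch of the idea, not necessarily the authors' exact ordering scheme.

```python
# Assumed greedy sketch of fault-based prioritization (not necessarily the
# authors' exact scheme): the fault model tells us, before any execution,
# which faults each generated test can reveal; tests are then ordered so
# that each next test reveals the most not-yet-covered faults.
def prioritize(detects):
    """detects maps test id -> set of fault ids it reveals."""
    remaining = set().union(*detects.values())
    order, pool = [], dict(detects)
    while pool:
        best = max(pool, key=lambda t: len(pool[t] & remaining))
        order.append(best)
        remaining -= pool.pop(best)
    return order

detects = {"t1": {"f1"}, "t2": {"f1", "f2", "f3"}, "t3": {"f3", "f4"}}
print(prioritize(detects))  # ['t2', 't3', 't1']
```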

Results

A theoretically guided fault-based prioritization technique generally outperforms other techniques under study, as assessed by two different metrics. Our empirical results also show that the technique helps to reveal all target faults by executing only about 72% of the prioritized test suite, thereby reducing the effort required in testing.

Conclusions

The fault-based prioritization approach is not only applicable to the instance empirically validated in this paper, but should also be adaptable to other fault-based testing strategies. We also envisage new research directions to be opened up by our work.

4.

Context

Software developers spend considerable effort implementing auxiliary functionality used by the main features of a system (e.g., compressing/decompressing files, encryption/decryption of data, scaling/rotating images). With the increasing amount of open source code available on the Internet, time and effort can be saved by reusing these utilities through informal practices of code search and reuse. However, when this type of reuse is performed in an ad hoc manner, it can be tedious and error-prone: code results have to be manually inspected and integrated into the workspace.

Objective

In this paper we introduce and evaluate the use of test cases as an interface for automating code search and reuse. We call our approach Test-Driven Code Search (TDCS). Test cases serve two purposes: (1) they define the behavior of the desired functionality being searched for; and (2) they test the matching results for suitability in the local context. We also describe CodeGenie, an Eclipse plugin we have developed that performs TDCS using a code search engine called Sourcerer.
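The essence of TDCS fits in a few lines: the test that defines the desired behaviour is also used to filter candidate implementations. In this assumed sketch, the candidate functions stand in for code returned by a search engine such as Sourcerer; CodeGenie's actual integration machinery is not reproduced.

```python
# Minimal illustration of the TDCS idea (CodeGenie and Sourcerer themselves
# are not reproduced): the test that defines the desired behaviour is also
# used to filter candidate implementations returned by a code search.
def test_slug(f):
    """The test case doubling as the search interface."""
    return f("Hello World") == "hello-world" and f("  A  B ") == "a-b"

# Hypothetical search results, standing in for code found on the Internet.
def candidate_1(s):
    return s.lower().replace(" ", "-")

def candidate_2(s):
    return "-".join(s.lower().split())

suitable = [c.__name__ for c in (candidate_1, candidate_2) if test_slug(c)]
print(suitable)  # only candidates passing the test survive: ['candidate_2']
```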

Method

Our evaluation consists of two studies: an applicability study with 34 different features that were searched using CodeGenie; and a performance study comparing CodeGenie, Google Code Search, and a manual approach.

Results

Both studies present evidence of the applicability and good performance of TDCS in the reuse of auxiliary functionality.

Conclusion

This paper presents an approach to source code search and its application to the reuse of auxiliary functionality. Our exploratory evaluation shows promising results, which motivates the use and further investigation of TDCS.

5.

Context

Testing a module that has memory using the black-box approach has been found to be expensive and relatively ineffective. Conversely, testing without knowledge of the specifications (the white-box approach) may not be effective in showing whether a program has been properly implemented as stated in its specifications. We propose instead a grey-box approach called Module Documentation-based Testing (MD-Test), the heart of which is the automatic generation of the test oracle from the external and internal views of the module.

Objective

This paper presents an empirical analysis and comparison of MD-Test against three existing testing tools.

Method

The experiment was conducted using a mutation-testing approach, in two phases that assess the capability of MD-Test in general and its capability of evaluating test results in particular.

Results

The results of the general assessment indicate that MD-Test is more effective than the other three tools under comparison, being able to detect all faults. The second phase of the experiment, which is significant to this study, compares the capabilities of MD-Test and JUnit-black using the test evaluation results. An analysis of these results shows that MD-Test is more effective and efficient, detecting at least as many faults as the black-box approach.

Conclusion

It is concluded that test evaluation using a grey-box approach is more effective and efficient than the black-box approach when testing a module that has memory.

6.

Context

In software development, testing is an important mechanism both to identify defects and to assure that completed products work as specified. This is a common practice in single-system development, and it continues to hold in Software Product Lines (SPL). Even though extensive research has been done in the SPL testing field, it is necessary to assess the current state of research and practice in order to provide practitioners with evidence that enables fostering its further development.

Objective

This paper focuses on testing in SPL and has the following goals: investigate state-of-the-art testing practices, synthesize available evidence, and identify gaps between required techniques and the approaches available in the literature.

Method

A systematic mapping study was conducted with a set of nine research questions, in which 120 studies, dated from 1993 to 2009, were evaluated.

Results

Although several aspects of testing have been covered by single-system development approaches, many cannot be directly applied in the SPL context due to SPL-specific issues. In addition, particular SPL aspects are not covered by the existing SPL approaches, and where they are covered, the literature offers only brief overviews. This scenario indicates that additional empirical and practical investigation should be performed.

Conclusion

The results can help to understand the needs in SPL testing by identifying points that still require additional investigation, since important aspects of software product lines have not been addressed yet.

7.
8.
9.

Context

A particular strength of agile systems development approaches is that they encourage a move away from ‘introverted’ development, involving the customer in all areas of development and leading to more innovative and hence more valuable information systems. However, a move toward open innovation requires a focus that goes beyond a single customer representative, involving a broader range of stakeholders, both inside and outside the organisation, in a continuous, systematic way.

Objective

This paper provides an in-depth discussion of the applicability and implications of open innovation in an agile environment.

Method

We draw on two illustrative cases from industry.

Results

We highlight some distinct problems that arose when two project teams tried to combine agile and open innovation principles. For example, openness is often compromised by a perceived competitive element and a lack of transparency between business units. In addition, minimal documentation often reduces effective knowledge transfer, while the use of short iterations, stand-up meetings and the presence of an on-site customer reduce the amount of time available for sharing ideas outside the team.

Conclusion

A clear understanding of the inter- and intra-organisational applicability and implications of open innovation in agile systems development is required to address key challenges for research and practice.

10.

Context

The field of mutation analysis has been growing, both in the number of published papers and the number of active researchers. This special issue provides a sampling of recent advances and ideas. But do all the new researchers know where we started?

Objective

To imagine where we are going, we must first know where we are. To understand where we are, we must know where we have been. This paper reviews past mutation analysis research, considers the present, then imagines possible future directions.

Method

A retrospective study of past trends lets us see the current state of mutation research in a clear context, allowing us to imagine and then create future vectors.

Results

The value of mutation has greatly expanded since the early view of mutation as an expensive way to unit test subroutines. Our understanding of what mutation is and how it can help has become much deeper and broader.

Conclusion

Mutation analysis has been around for 35 years, but we are just now beginning to see its full potential. The papers in this issue and future mutation workshops will eventually allow us to realize this potential.

11.

Context

Service Oriented Architectures (SOA) have emerged as a new paradigm to develop interoperable and highly dynamic applications.

Objective

This paper aims to identify the state of the art in the research on testing in Service Oriented Architectures with dynamic binding.

Method

A mapping study has been performed employing both manual and automatic search in journals, conference/workshop proceedings and electronic databases.

Results

A total of 33 studies have been reviewed in order to extract relevant information regarding a previously defined set of research questions. The detection of faults and decision making based on the information gathered from the tests have been identified as the main objectives of these studies. To achieve these goals, monitoring and test case generation are the most frequently proposed techniques, testing both functional and non-functional properties. Furthermore, different stakeholders have been identified as participants in the tests, which are performed at specific points in time during the life cycle of the services. Finally, it has been observed that a considerable group of studies have not yet validated their approaches.

Conclusions

Although we found only 33 studies that address the testing of SOA where the discovery and binding of the services are performed at runtime, this number can be considered significant due to the specific nature of the reviewed topic. The results of this study contribute a body of knowledge that makes it possible to identify current gaps in improving the quality of dynamic binding in SOA using testing approaches.

12.

Context

Component identification, the process of evolving legacy systems into finely organized component-based software systems, is a critical part of software reengineering. Currently, many component identification approaches have been developed based on agglomerative hierarchical clustering algorithms. However, there is a lack of thorough investigation into which algorithm is appropriate for component identification.

Objective

This paper focuses on analyzing agglomerative hierarchical clustering algorithms in software reengineering, and then identifying their respective strengths and weaknesses in order to apply them effectively for future practical applications.

Method

A series of experiments were conducted for 18 clustering strategies combined according to various similarity measures, weighting schemes and linkage methods. Eleven subject systems with different application domains and source code sizes were used in the experiments. The component identification results are evaluated by the proposed size, coupling and cohesion criteria.
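For readers who want to reproduce one such strategy, the sketch below combines a cosine similarity measure with complete linkage using SciPy. This is one assumed point in the 18-strategy design space; the toy usage matrix stands in for data extracted from a real legacy system.

```python
# Sketch of one strategy from the explored design space (similarity measure
# x weighting scheme x linkage method), using SciPy. The toy binary usage
# matrix is an assumed stand-in for data extracted from a legacy system.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Rows: program entities; columns: which shared resources each entity uses.
usage = np.array([[1, 1, 0, 0],
                  [1, 1, 1, 0],
                  [0, 0, 1, 1],
                  [0, 0, 0, 1]], dtype=float)

# One of the 18 strategies: cosine similarity combined with complete linkage.
Z = linkage(usage, method="complete", metric="cosine")
components = fcluster(Z, t=2, criterion="maxclust")
print(components)  # [1 1 2 2]: two candidate components of the legacy system
```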

Results

The experimental results suggested that the employed similarity measures, weighting schemes and linkage methods can have various effects on component identification results with respect to the proposed size, coupling and cohesion criteria, so the hierarchical clustering algorithms produced quite different clustering results.

Conclusions

According to the experimental results, it can be concluded that it is difficult for any single clustering algorithm to produce perfectly satisfactory results. Nevertheless, the algorithms demonstrated varied capabilities to identify components with respect to the proposed size, coupling and cohesion criteria.

13.
Towards an ontology-based retrieval of UML Class Diagrams

Context

Software reuse has always been important to software companies as a means of increasing their productivity and the quality of their products, but code reuse is not the only answer. Nowadays, proposed reuse techniques also cover software designs and even software specifications. This research therefore focuses on software design, specifically on UML Class Diagrams. A semantic technology has been applied to facilitate the retrieval process for effective reuse.

Objective

This research proposes an ontology-based retrieval technique using semantic similarity in order to support an effective retrieval process for UML Class Diagrams. Since UML Class Diagrams are a de facto standard in the design stages of a software development process, a good technique is needed to reuse them, i.e. to reuse at the design stage rather than just the coding stage.

Method

An application ontology modeled using UML specifications was designed to compare UML Class Diagram element types. To measure their similarity, a survey was conducted amongst UML experts. Query expansion was improved by a domain ontology supporting the retrieval phase. The computation of minimal distances in the ontologies was performed using a shortest-path algorithm.
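The distance step can be illustrated directly: similarity between two element types is derived from their shortest-path distance in an ontology graph. The toy ontology and the 1/(1+d) mapping below are assumptions; the paper's application ontology and expert-derived similarity data are not reproduced.

```python
# Hedged sketch of the distance step: similarity between two UML element
# types is derived from their shortest-path distance in an ontology graph.
# The toy ontology and the 1/(1+d) mapping are assumptions of this sketch.
import networkx as nx

onto = nx.Graph()
onto.add_edges_from([
    ("Element", "Classifier"), ("Classifier", "Class"),
    ("Classifier", "Interface"), ("Element", "Relationship"),
    ("Relationship", "Association"),
])

def similarity(a: str, b: str) -> float:
    """Map shortest-path length to (0, 1]; identical concepts score 1."""
    d = nx.shortest_path_length(onto, a, b)  # BFS on the unweighted graph
    return 1.0 / (1.0 + d)

print(similarity("Class", "Interface"))    # path length 2 -> 1/3
print(similarity("Class", "Association"))  # path length 4 -> 0.2
```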

Results

The case study shows the importance of the domain ontology in the UML Class Diagram retrieval process, as well as the importance of an element-type expansion method such as an application ontology. By analyzing the results, a correlation between query complexity and the retrieved elements was identified. Finally, a positive return on investment (ROI) was estimated using Poulin’s model.

Conclusion

Because software reuse need not be limited to the coding stage, approaches to reuse at the design stage, i.e. the reuse of UML Class Diagrams, must be developed. This paper proposes a technique for UML Class Diagram retrieval, which is one important step towards reuse. Semantic technology combined with information retrieval improves the retrieval results.

14.
Evolutionary mutation testing

Context

Mutation testing is a testing technique that has been applied successfully to several programming languages. However, it is often regarded as computationally expensive, so several refinements have been proposed to reduce its cost. Moreover, WS-BPEL compositions are being widely adopted by developers, but present new challenges for testing, since they can take much longer to run than traditional programs of the same size. Therefore, it is interesting to reduce the number of mutants required.

Objective

We present Evolutionary Mutation Testing (EMT), a novel mutant reduction technique for finding mutants that help derive new test cases that improve the quality of the initial test suite. It uses evolutionary algorithms to reduce the number of mutants that are generated and executed with respect to the exhaustive execution of all possible mutants, keeping as many difficult to kill and potentially equivalent mutants (strong mutants) as possible in the reduced set.
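The search-based core of EMT can be sketched in miniature, as below: an evolutionary loop looks for strong mutants instead of executing every possible mutant. Everything here is an assumed toy (GAmera, WS-BPEL and real mutant execution are not reproduced; the kills function is a synthetic stand-in for running the test suite).

```python
# Toy sketch of the EMT idea (GAmera and WS-BPEL are not reproduced): an
# evolutionary loop searches the space of mutant ids for 'strong' mutants -
# those killed by few or no tests - instead of executing every mutant.
import random

random.seed(1)
N_MUTANTS, TESTS = 100, 20

def kills(m):
    """Assumed stand-in for running the test suite against mutant m."""
    return (m * 13) % (TESTS + 1)

def fitness(m):
    """Fewer killing tests means a stronger, more interesting mutant."""
    return TESTS - kills(m)

population = random.sample(range(N_MUTANTS), 10)
for _ in range(30):                       # evolutionary search, not exhaustion
    parent = max(random.sample(population, 3), key=fitness)  # tournament
    child = (parent + random.choice([-1, 1])) % N_MUTANTS    # mutate the id
    weakest = min(population, key=fitness)
    if fitness(child) > fitness(weakest) and child not in population:
        population[population.index(weakest)] = child        # replacement

strong = [m for m in population if kills(m) == 0]  # potentially equivalent
print(sorted(population), strong)
```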

Method

To evaluate EMT we have developed GAmera, a mutation testing system powered by a co-evolutive genetic algorithm. We have applied this system to three WS-BPEL compositions to estimate its effectiveness, comparing it with random selection.

Results

The results obtained experimentally show that, for complex compositions, EMT can select all strong mutants while generating 15% fewer mutants than random selection and in over 20% less time. When generating a percentage of all mutants, EMT finds on average more strong mutants than random selection. This difference has been confirmed to be statistically significant at the 99.9% confidence level.

Conclusions

For the three tested compositions, EMT has reduced the number of mutants required to select those that are useful for deriving new test cases that improve the quality of the test suite. The directed search performed by EMT makes it more effective than random selection, especially as compositions become more complex and the search space widens.

15.

Context

Although agile software development methods such as SCRUM and DSDM are gaining popularity, the consequences of applying agile principles to software product management have received little attention until now.

Objective

In this paper, this gap is filled by the introduction of a method for the application of SCRUM principles to software product management.

Method

A case study research approach is employed to describe and evaluate this method.

Results

This has resulted in the ‘agile requirements refinery’, an extension to the SCRUM process that enables product managers to cope with complex requirements in an agile development environment. A case study is presented to illustrate how agile methods can be applied to software product management.

Conclusions

The experiences of the case study company are provided as a set of lessons learned that will help others to apply agile principles to their software product management process.

16.

Context

Data warehouse conceptual design is based on the metaphor of the cube, which can be derived from either requirement-driven or data-driven methodologies. Each methodology has its own advantages. The first allows designers to obtain a conceptual schema very close to user needs, but the schema may not be supported by the data actually available. Conversely, the second ensures perfect traceability and consistency with the data sources (in fact, it guarantees the presence of the data to be used in analytical processing), but does not protect against missing business user needs. To address this issue, the need has emerged in recent years to define hybrid methodologies for conceptual design.

Objective

The objective of this paper is to use a hybrid methodology based on different multidimensional models in order to combine the advantages of each.

Method

The proposed methodology integrates the requirement-driven strategy with the data-driven one, in that order, possibly performing alterations of functional dependencies on UML multidimensional schemas reconciled with data sources.

Results

As a case study, we illustrate how our methodology can be applied to the university environment. Furthermore, we quantitatively evaluate the benefits of this methodology by comparing it with some popular, conventional methodologies.

Conclusion

In conclusion, we highlight how the hybrid methodology improves the quality of the conceptual schema. Finally, we outline our present work, which is devoted to introducing automatic design techniques into the methodology on the basis of logic programming.

17.

Context

Assessing software quality at the early stages of the design and development process is very difficult since most of the software quality characteristics are not directly measurable. Nonetheless, they can be derived from other measurable attributes. For this purpose, software quality prediction models have been extensively used. However, building accurate prediction models is hard due to the lack of data in the domain of software engineering. As a result, the prediction models built on one data set show a significant deterioration of their accuracy when they are used to classify new, unseen data.

Objective

The objective of this paper is to present an approach that optimizes the accuracy of software quality predictive models when used to classify new data.

Method

This paper presents an adaptive approach that takes already built predictive models and adapts them (one at a time) to new data. We use an ant colony optimization algorithm in the adaptation process. The approach is validated on stability of classes in object-oriented software systems and can easily be used for any other software quality characteristic. It can also be easily extended to work with software quality predictive problems involving more than two classification labels.
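To give the adaptation step some shape, the sketch below assumes the simplest possible model, a single threshold rule "unstable if metric > t", and lets ant-like agents search for a context-specific threshold on new data via pheromone-guided selection. The paper's actual rule models and ant colony optimization algorithm are considerably richer; every value here is a toy assumption.

```python
# Highly simplified sketch of the adaptation step. Assumed setup: the model
# is a single rule 'unstable if metric > t', and ant-like agents search for
# a context-specific threshold t on new data; the paper's actual models and
# ant colony optimization algorithm are considerably richer.
import random

random.seed(0)
# (metric value, true label) pairs from the new, context-specific data set.
new_data = [(x / 10, x / 10 > 0.62) for x in range(11)]

candidates = [i / 10 for i in range(11)]       # candidate thresholds
pheromone = {t: 1.0 for t in candidates}

def accuracy(t):
    return sum((x > t) == y for x, y in new_data) / len(new_data)

for _ in range(50):                            # one ant per iteration
    r, acc = random.uniform(0, sum(pheromone.values())), 0.0
    for t in candidates:                       # roulette-wheel selection
        acc += pheromone[t]
        if r <= acc:
            break
    for k in pheromone:                        # evaporation on all trails
        pheromone[k] *= 0.95
    pheromone[t] += accuracy(t)                # deposit proportional to quality

best = max(candidates, key=lambda t: pheromone[t])
print(best, accuracy(best))  # the adapted, context-specific rule threshold
```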

Results

Results show that our approach outperforms both the machine learning algorithm C4.5 and random guessing. It also preserves the expressiveness of the models, which provide not only the classification label but also guidelines for attaining it.

Conclusion

Our approach is an adaptive one that can be seen as taking predictive models that have already been built from common domain data and adapting them to context-specific data. This is suitable for the domain of software quality, since data is very scarce and hence predictive models built from one data set are hard to generalize and reuse on new data.

18.

Context

A feature model (FM) represents the valid combinations of features in a domain. The automated extraction of information from FMs is a complex task that involves numerous analysis operations, techniques and tools. Current testing methods in this context are manual and rely on the ability of the tester to decide whether the output of an analysis is correct. However, this is acknowledged to be time-consuming, error-prone and, in most cases, infeasible due to the combinatorial complexity of the analyses; this is known as the oracle problem.

Objective

In this paper, we propose using metamorphic testing to automate the generation of test data for feature model analysis tools overcoming the oracle problem. An automated test data generator is presented and evaluated to show the feasibility of our approach.

Method

We present a set of relations (so-called metamorphic relations) between input FMs and the sets of products they represent. Based on these relations, and given a FM and its known set of products, a set of neighbouring FMs together with their corresponding product sets is automatically generated and used for testing multiple analyses. Complex FMs representing millions of products can be efficiently created by applying this process iteratively.
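Two such relations can be written down directly, as in the sketch below: adding an optional child feature extends the product set in a predictable way, and adding a mandatory child leaves the product count unchanged. The relations themselves are standard feature-model facts; the tools under test (e.g. FaMa or SPLOT) are not invoked here, and the tiny model is illustrative.

```python
# Sketch of two metamorphic relations on feature models (the analysis tools
# under test are not invoked here): from a model whose product set is known,
# the product set of a neighbouring model is derived without re-analysing
# it, which yields a free oracle for checking a tool's output.
def add_optional(products, parent, feat):
    """Neighbour FM: 'feat' added as an optional child of 'parent'."""
    extra = {frozenset(p | {feat}) for p in products if parent in p}
    return {frozenset(p) for p in products} | extra

def add_mandatory(products, parent, feat):
    """Neighbour FM: 'feat' added as a mandatory child of 'parent'."""
    return {frozenset(p | {feat}) if parent in p else frozenset(p)
            for p in products}

base = {frozenset({"car"}), frozenset({"car", "gps"})}
print(len(add_optional(base, "car", "radio")))   # 4: each product may add it
print(len(add_mandatory(base, "gps", "maps")))   # 2: count is unchanged
```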

Results

Our evaluation results using mutation testing and real faults reveal that most faults can be automatically detected within a few seconds. Two defects were found in FaMa and another two in SPLOT, two real tools for the automated analysis of feature models. We also show how our generator outperforms a related manual suite for the automated analysis of feature models, and how this suite can be used to guide the automated generation of test cases, obtaining important gains in efficiency.

Conclusion

Our results show that the application of metamorphic testing in the domain of automated analysis of feature models is efficient and effective in detecting most faults in a few seconds without the need for a human oracle.

19.

Context

Model-driven approaches deal with the provision of models, transformations between them and code generators to address software development. This approach has the advantage of defining a conceptual structure, where the models used by business managers and analysts can be mapped into more detailed models used by software developers. This alignment between high-level business specifications and the lower-level information technologies (ITs) models is crucial to the field of service-oriented development, where meaningful business services and process specifications are those relevant to real business scenarios.

Objective

This paper presents a model-driven approach which, starting from high-level computation-independent business models (CIMs) - the business view - sets out guidelines for obtaining lower-level platform-independent behavioural models (PIMs) - the information system view. A key advantage of our approach is the use of real high-level business models, not just requirements models, which, by means of model transformations, helps software developers to make the most of the business knowledge when specifying and developing business services.

Method

This proposal is framed in a method for service-oriented development of information systems whose main characteristic is the use of services as first-class objects. The method follows an MDA-based approach, proposing a set of models at different levels of abstraction and model transformations to connect them.

Results

The paper presents the complete set of CIM and PIM metamodels and the specification of the mappings between them, whose clear advantage is the support they provide for aligning the high-level business view with IT. The proposed model-driven process is being implemented in an MDA tool. A first prototype has been used to develop a travel agency case study that illustrates the proposal.

Conclusion

This study shows how a model-driven approach helps to solve the alignment problem between the business view and the information system view that arises when adopting service-oriented approaches for software development.

20.
The Callsign Acquisition Test (CAT) is a new speech intelligibility test developed by the Human Research and Engineering Directorate of the U.S. Army Research Laboratory (ARL-HRED). CAT combines phonetic-alphabet and digit stimuli to form 126 test items.

Objective

The purpose of this study was to assess the reliability of data collected with shorter versions of CAT.

Design

A total of 5 shorter versions of the original list (CAT-120, CAT-60, CAT-40, CAT-30, and CAT-24) were formed and evaluated using 19 participants. Each of the subsets of CAT was presented in pink noise at signal-to-noise ratios (SNRs) of −6 dB and −9 dB.
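The presentation levels can be made concrete with the standard RMS-based gain computation, sketched below: the noise is scaled so that the speech-to-noise ratio hits the target in dB. The formula is conventional audio practice; the random arrays merely stand in for the CAT stimuli and the pink-noise recording.

```python
# Worked example of the presentation levels: scaling noise so that a speech
# item is played at a target SNR. The RMS-based gain formula is standard;
# the random arrays below are assumed stand-ins for the CAT stimuli and the
# pink-noise recording.
import numpy as np

def rms(x):
    return np.sqrt(np.mean(np.square(x)))

def scale_noise(speech, noise, snr_db):
    """Scale 'noise' so that 20*log10(rms(speech)/rms(noise')) == snr_db."""
    return noise * rms(speech) / (rms(noise) * 10 ** (snr_db / 20))

rng = np.random.default_rng(0)
speech, noise = rng.normal(size=16000), rng.normal(size=16000)
noise_minus9 = scale_noise(speech, noise, -9.0)
mixed = speech + noise_minus9               # the -9 dB SNR condition
print(round(20 * np.log10(rms(speech) / rms(noise_minus9)), 2))  # -9.0
```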

Results

Results showed that shortened CAT lists have the capability of providing the same predictive power as the full CAT with good test-retest reliability.

Conclusions

Under the experimental conditions of this study, any of the shorter versions of the CAT can be utilized in place of the full version to reduce testing times with no effect on predictive power.
