Similar Literature
1.

Context

Software product lines (SPL) are used in industry to achieve more efficient software development. However, the testing side of SPL is underdeveloped.

Objective

This study aims at surveying existing research on SPL testing in order to identify useful approaches and needs for future research.

Method

A systematic mapping study is launched to find as much literature as possible, and the 64 papers found are classified with respect to focus, research type and contribution type.

Results

A majority of the papers are of the proposal research type (64%). System testing is the largest group with respect to research focus (40%), followed by management (23%). Method contributions are in the majority.

Conclusions

More validation and evaluation research is needed to provide a better foundation for SPL testing.

2.

Context

Variability management (VM) is one of the most important activities of software product-line engineering (SPLE), which intends to develop software-intensive systems using platforms and mass customization. VM encompasses the activities of eliciting and representing variability in software artefacts, establishing and managing dependencies among different variabilities, and supporting the exploitation of the variabilities for building and evolving a family of software systems. The software product line (SPL) community has devoted a huge amount of effort to developing various approaches to dealing with variability-related challenges during the last two decades. Several dozen VM approaches have been reported. However, there has been no systematic effort to study how the reported VM approaches have been evaluated.

Objective

The objectives of this research are to review the status of evaluation of reported VM approaches and to synthesize the available evidence about the effects of the reported approaches.

Method

We carried out a systematic literature review of the VM approaches in SPLE reported from the 1990s until December 2007.

Results

We selected 97 papers according to our inclusion and exclusion criteria. The selected papers appeared in 56 publication venues. We found that only a small number of the reviewed approaches had been evaluated using rigorous scientific methods. A detailed investigation of the reviewed studies employing empirical research methods revealed significant quality deficiencies in various aspects of the used quality assessment criteria. The synthesis of the available evidence showed that all studies, except one, reported only positive effects.

Conclusion

The findings from this systematic review show that a large majority of the reported VM approaches have not been sufficiently evaluated using scientifically rigorous methods. The available evidence is sparse and the quality of the presented evidence is quite low. The findings highlight the areas in need of improvement, i.e., rigorous evaluation of VM approaches. However, the reported evidence is quite consistent across different studies. That means the proposed approaches may be very beneficial when they are applied properly in appropriate situations. Hence, it can be concluded that further investigations need to pay more attention to the contexts under which different approaches can be more beneficial.

3.

Context

In the long run, features of a software product line (SPL) evolve with respect to changes in stakeholder requirements and system contexts. Neither domain engineering nor requirements engineering handles such co-evolution of requirements and contexts explicitly, making it especially hard to reason about the impact of co-changes in complex scenarios.

Objective

In this paper, we propose a problem-oriented and value-based analysis method for variability evolution analysis. The method takes into account both kinds of changes (requirements and contexts) during the life of an evolving software product line.

Method

The proposed method extends the core requirements engineering ontology with notions to represent variability-intensive problem decomposition and evolution. On the basis of problem orientation, the analysis method identifies candidate changes, detects influenced features, and evaluates their contributions to the value of the SPL.

Results and Conclusion

The process of applying the analysis method is illustrated using a concrete case study of an evolving enterprise software system, which has confirmed that tracing back to requirements and contextual changes is an effective way to understand the evolution of variability in the software product line.

4.

Context

It is important for Product Line Architectures (PLA) to remain stable accommodating evolutionary changes of stakeholder’s requirements. Otherwise, architectural modifications may have to be propagated to products of a product line, thereby increasing maintenance costs. A key challenge is that several features are likely to exert a crosscutting impact on the PLA decomposition, thereby making it more difficult to preserve its stability in the presence of changes. Some researchers claim that the use of aspects can ameliorate instabilities caused by changes in crosscutting features. Hence, it is important to understand which aspect-oriented (AO) and non-aspect-oriented techniques better cope with PLA stability through evolution.

Objective

This paper evaluates the positive and negative change impact of component and aspect based design on PLAs. The objective of the evaluation is to assess how aspects and components promote PLA stability in the presence of various types of evolutionary change. To support a broader analysis, we also evaluate the PLA stability of a hybrid approach (i.e. combined use of aspects and components) against the isolated use of component-based, OO, and AO approaches.

Method

We performed a quantitative and qualitative analysis of PLA stability involving four different implementations of a PLA: (i) an OO implementation, (ii) an AO implementation, (iii) a component-based implementation, and (iv) a hybrid implementation where both components and aspects are employed. Each implementation has eight releases and they are functionally equivalent. We used conventional metrics suites for change impact and modularity to evaluate the architectural stability of the four implementations.

Results

The combination of aspects and components promotes greater PLA resilience than the other implementations in most of the circumstances.

Conclusion

It is concluded that the combination of aspects and components supports the design of highly cohesive and loosely coupled PLAs. It also contributes to improved modularity by untangling feature implementation.

5.

Context

Service Oriented Architectures (SOA) have emerged as a new paradigm to develop interoperable and highly dynamic applications.

Objective

This paper aims to identify the state of the art in the research on testing in Service Oriented Architectures with dynamic binding.

Method

A mapping study has been performed employing both manual and automatic search in journals, conference/workshop proceedings and electronic databases.

Results

A total of 33 studies have been reviewed in order to extract relevant information regarding a previously defined set of research questions. The detection of faults and the decision making based on the information gathered from the tests have been identified as the main objectives of these studies. To achieve these goals, monitoring and test case generation are the most frequently proposed techniques, testing both functional and non-functional properties. Furthermore, different stakeholders have been identified as participants in the tests, which are performed at specific points in time during the life cycle of the services. Finally, it has been observed that a relevant group of studies have not yet validated their approaches.

Conclusions

Although we have only found 33 studies that address the testing of SOA where the discovery and binding of the services are performed at runtime, this number can be considered significant due to the specific nature of the reviewed topic. The results of this study contribute to a body of knowledge that allows identifying current gaps in improving the quality of dynamic binding in SOA using testing approaches.

6.
7.

Context

The loose coupling of services and Service-Based Applications (SBAs) has made them an ideal platform for context-based run-time adaptation. There has been a lot of research into implementation techniques for adapting SBAs, but little effort has focused on the software process required to guide the adaptation.

Objective

This paper aims to bridge that gap by providing an empirically grounded software process model that can be used by software practitioners who want to build adaptable SBAs. The process model focuses only on adaptation-specific issues.

Method

The process model presented in this paper is based on data collected through interviews with 10 practitioners occupying various roles within eight different companies. The data was analyzed using qualitative data analysis techniques. We used the output to develop a set of activities, tasks, stakeholders and artifacts that were used to construct the process model.

Results

The outcome of the data analysis process was a process model identifying nine sets of adaptation process attributes. These can be used in conjunction with an organisation’s existing development life-cycle or another reference life-cycle.

Conclusion

The process model developed in this paper provides a solid reference for practitioners who are planning to develop adaptable SBAs. It has advantages over similar approaches in that it focuses on the software process rather than the specific adaptation mechanism implementation techniques.

8.
Towards an ontology-based retrieval of UML Class Diagrams

Context

Software reuse has always been an important area for software companies seeking to increase their productivity and the quality of their products, but code reuse is not the only answer. Nowadays, proposed reuse techniques also include software designs and even software specifications. Therefore, this research focuses on software design, specifically on UML Class Diagrams. A semantic technology has been applied to facilitate the retrieval process for effective reuse.

Objective

This research proposes an ontology-based retrieval technique based on semantic similarity in order to support an effective retrieval process for UML Class Diagrams. Since UML Class Diagrams are a de facto standard in the design stages of a software development process, a good technique is needed to reuse them, i.e. enabling reuse during the design stage instead of just the coding stages.

Method

An application ontology modeled using UML specifications was designed to compare UML Class Diagram element types. To measure their similarity, a survey was conducted amongst UML experts. Query expansion was improved by a domain ontology supporting the retrieval phase. The calculation of minimal distances in the ontologies was solved using a shortest-path algorithm.
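The abstract does not give the algorithm itself; purely as an illustration, the sketch below shows one common way to turn shortest-path distance in a concept graph into a similarity score. The toy ontology, the 1/(1 + distance) similarity measure, and all identifiers are assumptions, not taken from the paper.

```python
from collections import deque

def shortest_path_length(graph, source, target):
    """Breadth-first search over an unweighted concept graph (dict of adjacency sets)."""
    if source == target:
        return 0
    visited = {source}
    queue = deque([(source, 0)])
    while queue:
        node, dist = queue.popleft()
        for neighbour in graph.get(node, ()):
            if neighbour == target:
                return dist + 1
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append((neighbour, dist + 1))
    return float("inf")  # no path between the two concepts

def semantic_similarity(graph, a, b):
    """Map path distance to a (0, 1] similarity score; 1 means identical concepts."""
    return 1.0 / (1.0 + shortest_path_length(graph, a, b))

# Toy ontology fragment of UML element types (purely illustrative).
ontology = {
    "Classifier": {"Class", "Interface", "DataType"},
    "Class": {"Classifier", "Association"},
    "Interface": {"Classifier"},
    "DataType": {"Classifier", "Enumeration"},
    "Enumeration": {"DataType"},
    "Association": {"Class"},
}

print(semantic_similarity(ontology, "Class", "Interface"))    # distance 2 -> 0.33...
print(semantic_similarity(ontology, "Class", "Enumeration"))  # distance 3 -> 0.25
```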

Results

The case study shows the importance of the domain ontology in the UML Class Diagram retrieval process, as well as the importance of an element type expansion method such as the application ontology. By analyzing the results, a correlation between query complexity and the retrieved elements was identified. Finally, a positive Return on Investment (ROI) was estimated using Poulin's Model.

Conclusion

Because software reuse does not have to be limited to the coding stage, approaches to reusing design-stage artifacts, i.e. UML Class Diagrams, must be developed. This approach proposes a technique for UML Class Diagram retrieval, which is one important step towards reuse. Semantic technology combined with information retrieval improves the retrieval results.

9.

Context

In training disciplined software development, the PSP is said to result in such effects as increased estimation accuracy, better software quality, earlier defect detection, and improved productivity. However, a systematic mechanism that can be easily adopted to assess and interpret PSP effects is scarce in the existing literature.

Objective

The purpose of this study is to explore the possibility of devising a feasible assessment model that ties critical software engineering values to the pertinent PSP metrics.

Method

A systematic review of the literature was conducted to establish such an assessment model (which we call the Plan-Track-Review model). Both mean and median approaches, along with a set of simplified procedures, were used to assess the commonly accepted PSP training effects. A set of statistical analyses then followed to increase understanding of the relationships among the PSP metrics and to help interpret the application results.

Results

Based on the results of this study, the PSP training effect on the controllability, manageability, and reliability of a software engineer is quite positive and largely consistent with the literature. However, its effect on one's predictability of a project in general (and of project size in particular) is not as evident as the literature suggests. As for one's overall project efficiency, our results show a moderate improvement. Our initial findings also suggest that a prior-stage PSP effect could have an impact on later-stage training outcomes.

Conclusion

It is concluded that this Plan-Track-Review model with the associated framework can be used to assess the PSP effect on disciplined software development. The generated summary report provides useful feedback for both PSP instructors and students based on internal as well as external standards.

10.

Context

The field of mutation analysis has been growing, both in the number of published papers and the number of active researchers. This special issue provides a sampling of recent advances and ideas. But do all the new researchers know where we started?

Objective

To imagine where we are going, we must first know where we are. To understand where we are, we must know where we have been. This paper reviews past mutation analysis research, considers the present, then imagines possible future directions.

Method

A retrospective study of past trends lets us see the current state of mutation research in a clear context, allowing us to imagine and then create future vectors.

Results

The value of mutation has greatly expanded since the early view of mutation as an expensive way to unit test subroutines. Our understanding of what mutation is and how it can help has become much deeper and broader.
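For readers new to the topic, the snippet below sketches the basic idea of mutation analysis referred to above: a small syntactic change (a mutant) is injected into the program, and the test suite is judged by whether it can "kill" the mutant. The function, the mutant, and the tests are invented for illustration and do not come from the paper.

```python
# Original unit under test.
def max_of(a, b):
    return a if a > b else b

# A mutant: the relational operator '>' has been replaced with '<'.
def max_of_mutant(a, b):
    return a if a < b else b

def run_tests(impl):
    """Return True if every assertion passes for the given implementation."""
    try:
        assert impl(3, 5) == 5
        assert impl(7, 2) == 7
        return True
    except AssertionError:
        return False

# The mutant is "killed" if the test suite fails on it while passing on the original;
# a suite that kills a larger share of mutants is considered more effective.
print(run_tests(max_of))             # True  -> original passes
print(not run_tests(max_of_mutant))  # True  -> mutant killed
```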

Conclusion

Mutation analysis has been around for 35 years, but we are just now beginning to see its full potential. The papers in this issue and future mutation workshops will eventually allow us to realize this potential.

11.

Context

Software development effort estimation (SDEE) is the process of predicting the effort required to develop a software system. In order to improve estimation accuracy, many researchers have proposed machine learning (ML) based SDEE models (ML models) since the 1990s. However, there has been no attempt to analyze the empirical evidence on ML models in a systematic way.

Objective

This research aims to systematically analyze ML models from four aspects: type of ML technique, estimation accuracy, model comparison, and estimation context.

Method

We performed a systematic literature review of empirical studies on ML models published in the last two decades (1991-2010).

Results

We have identified 84 primary studies relevant to the objective of this research. After investigating these studies, we found that eight types of ML techniques have been employed in SDEE models. Overall, the estimation accuracy of these ML models is close to the acceptable level and is better than that of non-ML models. Furthermore, different ML models have different strengths and weaknesses and thus favor different estimation contexts.
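The review does not prescribe a particular technique; purely as an illustration of what an ML-based SDEE model can look like, the sketch below implements a k-nearest-neighbour (estimation-by-analogy) predictor over a handful of made-up projects. The feature set, the toy data, and k = 2 are all assumptions.

```python
import math

# Toy historical projects: (size in KLOC, team size, measured effort in person-months).
# The numbers are invented for illustration only; real features should be normalised.
history = [
    (10, 3, 24),
    (25, 5, 70),
    (40, 8, 130),
    (60, 10, 210),
]

def knn_effort(size_kloc, team, k=2):
    """Estimate effort as the mean effort of the k most similar past projects."""
    distances = [
        (math.dist((size_kloc, team), (s, t)), effort)
        for s, t, effort in history
    ]
    nearest = sorted(distances)[:k]
    return sum(effort for _, effort in nearest) / k

print(knn_effort(30, 6))  # averages the efforts of the two closest past projects
```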

Conclusion

ML models are promising in the field of SDEE. However, the application of ML models in industry is still limited, so more effort and incentives are needed to facilitate their application. To this end, based on the findings of this review, we provide recommendations for researchers as well as guidelines for practitioners.

12.

Context

Software maintenance is an important software engineering activity that has been reported to account for the majority of the total software cost. Thus, understanding the factors that influence the cost of software maintenance tasks helps maintainers make informed decisions about their work.

Objective

This paper describes a controlled experiment of student programmers performing maintenance tasks on a C++ program. The objective of the study is to assess the maintenance size, effort, and effort distributions of three different maintenance types and to describe estimation models to predict the programmer’s effort spent on maintenance tasks.

Method

Twenty-three graduate students and a senior majoring in computer science participated in the experiment. Each student was asked to perform the maintenance tasks required for one of the three task groups. The impact of different LOC metrics on maintenance effort was also evaluated by fitting the collected data to four estimation models.

Results

The results indicate that corrective maintenance is much less productive than enhancive and reductive maintenance, and that program comprehension activities require as much as 50% of the total effort in corrective maintenance. Moreover, the best effort model can estimate the time of 79% of the programmers with an error of 30% or less.

Conclusion

Our study suggests that the LOC added, modified, and deleted metrics are good predictors for estimating the cost of software maintenance. Effort estimation models for maintenance work may use the LOC added, modified, and deleted metrics as independent parameters instead of the simple sum of the three. Another implication is that reducing business rules of the software requires a sizable proportion of the software maintenance effort. Finally, the differences in effort distribution among the maintenance types suggest that assigning maintenance tasks properly is important to effectively and efficiently utilize human resources.
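As a hedged illustration of the kind of model this conclusion describes, the sketch below fits a linear effort model with LOC added, modified, and deleted as separate predictors and reports PRED(30), the proportion of estimates within 30% of actual effort (the accuracy figure quoted in the Results). The data and coefficients are made up; the paper's actual models are not reproduced here.

```python
import numpy as np

# Invented training data: columns are LOC added, modified, deleted; y is effort in hours.
X = np.array([
    [120, 40, 10],
    [200, 15, 5],
    [60, 90, 30],
    [150, 60, 0],
    [80, 20, 50],
], dtype=float)
y = np.array([14.0, 16.5, 12.0, 17.0, 9.5])

# Fit effort = b0 + b1*added + b2*modified + b3*deleted by least squares,
# keeping the three LOC metrics as separate predictors rather than their sum.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

predictions = A @ coef
mre = np.abs(predictions - y) / y   # magnitude of relative error per task
pred_30 = np.mean(mre <= 0.30)      # PRED(30): share of estimates within 30% of actual
print(coef, pred_30)
```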

13.

Context

Assessing software quality at the early stages of the design and development process is very difficult since most of the software quality characteristics are not directly measurable. Nonetheless, they can be derived from other measurable attributes. For this purpose, software quality prediction models have been extensively used. However, building accurate prediction models is hard due to the lack of data in the domain of software engineering. As a result, the prediction models built on one data set show a significant deterioration of their accuracy when they are used to classify new, unseen data.

Objective

The objective of this paper is to present an approach that optimizes the accuracy of software quality predictive models when used to classify new data.

Method

This paper presents an adaptive approach that takes already built predictive models and adapts them (one at a time) to new data. We use an ant colony optimization algorithm in the adaptation process. The approach is validated on the stability of classes in object-oriented software systems and can easily be used for any other software quality characteristic. It can also be easily extended to work with software quality prediction problems involving more than two classification labels.
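The abstract does not detail the algorithm, so the following is only a rough sketch of how an ant colony optimization loop could adapt the decision thresholds of an existing rule-based classifier to new, context-specific data. The rule shape, the pheromone scheme, the metrics, and all numbers are assumptions rather than the authors' method.

```python
import random

# New, context-specific data: (coupling, size) metric pairs with a stability label (1 = stable).
# Values are invented for illustration.
data = [((2, 110), 1), ((9, 480), 0), ((3, 150), 1), ((8, 500), 0),
        ((5, 300), 1), ((7, 450), 0), ((4, 200), 1), ((6, 420), 0)]

# Candidate threshold values the ants can choose from for each rule condition.
options = {"coupling": [4, 5, 6, 7, 8], "size": [200, 300, 400, 500]}

def accuracy(thresholds):
    """The existing rule is kept fixed; only its thresholds are adapted:
    predict 'stable' when coupling and size are both below their thresholds."""
    hits = 0
    for (coupling, size), label in data:
        predicted = 1 if coupling < thresholds["coupling"] and size < thresholds["size"] else 0
        hits += predicted == label
    return hits / len(data)

def aco_adapt(n_ants=10, n_iter=30, evaporation=0.2):
    pheromone = {k: [1.0] * len(v) for k, v in options.items()}
    best, best_acc = None, -1.0
    for _ in range(n_iter):
        for _ in range(n_ants):
            # Each ant builds a candidate solution by sampling one value per metric,
            # with probability proportional to the accumulated pheromone.
            choice = {k: random.choices(range(len(v)), weights=pheromone[k])[0]
                      for k, v in options.items()}
            thresholds = {k: options[k][i] for k, i in choice.items()}
            acc = accuracy(thresholds)
            if acc > best_acc:
                best, best_acc = thresholds, acc
            # Deposit pheromone proportional to solution quality.
            for k, i in choice.items():
                pheromone[k][i] += acc
        # Evaporate pheromone so early choices do not dominate forever.
        for k in pheromone:
            pheromone[k] = [p * (1 - evaporation) for p in pheromone[k]]
    return best, best_acc

print(aco_adapt())  # e.g. ({'coupling': 6, 'size': 400}, 1.0)
```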

Results

Results show that our approach outperforms the machine learning algorithm C4.5 as well as random guessing. It also preserves the expressiveness of the models, which provide not only the classification label but also guidelines to attain it.

Conclusion

Our approach is an adaptive one that can be seen as taking predictive models that have already been built from common domain data and adapting them to context-specific data. This is suitable for the domain of software quality, since the data is very scarce and hence predictive models built from one data set are hard to generalize and reuse on new data.

14.

Context

Although agile software development methods such as SCRUM and DSDM are gaining popularity, the consequences of applying agile principles to software product management have received little attention until now.

Objective

In this paper, this gap is filled by the introduction of a method for the application of SCRUM principles to software product management.

Method

A case study research approach is employed to describe and evaluate this method.

Results

This has resulted in the ‘agile requirements refinery’, an extension to the SCRUM process that enables product managers to cope with complex requirements in an agile development environment. A case study is presented to illustrate how agile methods can be applied to software product management.

Conclusions

The experiences of the case study company are provided as a set of lessons learned that will help others to apply agile principles to their software product management process.

15.

Context

Testing is an essential part of the development life-cycle of any software product. While most phases of data warehouse design have received considerable attention in the literature, not much has been written about data warehouse testing.

Objective

In this paper we propose a comprehensive approach to testing data warehouse systems. Its main features are earliness with respect to the life-cycle, modularity, tight coupling with design, scalability, and measurability through proper metrics.

Method

We introduce a number of specific testing activities, we classify them in terms of what is tested and how it is tested, and we show how they can be framed within a prototype-based methodology. We apply our approach to a real case study for a large retail company.

Results

The case study, based on an iterative prototype-based medium-size project, confirmed the validity of our approach. In particular, the main benefits were obtained in terms of project transparency, coordination of the development team, and organization of design activities.

Conclusion

Though some general-purpose testing techniques can be applied to data warehouse projects, the effectiveness of testing can be largely improved by applying specifically-devised techniques and metrics.

16.
17.

Context

A particular strength of agile systems development approaches is that they encourage a move away from 'introverted' development, involving the customer in all areas of development and leading to more innovative and hence more valuable information systems. However, a move toward open innovation requires a focus that goes beyond a single customer representative, involving a broader range of stakeholders, both inside and outside the organisation, in a continuous, systematic way.

Objective

This paper provides an in-depth discussion of the applicability and implications of open innovation in an agile environment.

Method

We draw on two illustrative cases from industry.

Results

We highlight some distinct problems that arose when two project teams tried to combine agile and open innovation principles. For example, openness is often compromised by a perceived competitive element and a lack of transparency between business units. In addition, minimal documentation often reduces effective knowledge transfer, while the use of short iterations, stand-up meetings and the presence of an on-site customer reduce the amount of time available for sharing ideas outside the team.

Conclusion

A clear understanding of the inter- and intra-organisational applicability and implications of open innovation in agile systems development is required to address key challenges for research and practice.

18.

Context

There is surprisingly little empirical software engineering research (ESER) that has analysed and reported the rich, fine-grained behaviour of phenomena over time using qualitative and quantitative data. The ESER community also increasingly recognises the need to develop theories of software engineering phenomena, e.g. theories of the actual behaviour of software projects at the level of the project and over time.

Objective

To examine the use of the longitudinal, chronological case study (LCCS) as a research strategy for investigating the rich, fine-grained behaviour of phenomena over time using qualitative and quantitative data.

Method

Review the methodological literature on longitudinal case study. Define the LCCS and demonstrate the development and application of the LCCS research strategy to the investigation of Project C, a software development project at IBM Hursley Park. Use the study to consider prospects for LCCSs, and to make progress on a theory of software project behaviour.

Results

LCCSs appear to provide insights that are hard to achieve using existing research strategies, such as the survey study. The LCCS strategy has the basic requirements that data be time-indexed, relatively fine-grained, and collected contemporaneously with the events to which the data refer. Preliminary progress is made on a theory of software project behaviour.

Conclusion

LCCS appears well suited to analysing and reporting rich, fine-grained behaviour of phenomena over time.

19.

Aim

In this article, factors influencing the motivation of software engineers are studied with the goal of guiding the definition of motivational programs.

Method

Using a set of 20 motivational factors compiled in a systematic literature review and a general theory of motivation, a survey questionnaire was created to evaluate the influence of these factors on individual motivation. Then, the questionnaire was applied to a semi-random sample of 176 software engineers from 20 software companies located in Recife-PE, Brazil.

Results

The survey results show the actual level of motivation for each motivator in the target population. Using principal component analysis on the values of all motivators, a five-factor structure was identified and used to propose a guideline for the creation of motivational programs for software engineers.
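To make the analysis step concrete, here is a minimal sketch of how a factor structure can be extracted from survey responses with principal component analysis. The response matrix, the random scores, and the choice to keep five components are illustrative assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented survey matrix: 176 respondents x 20 motivational factors, scored 1-5.
responses = rng.integers(1, 6, size=(176, 20)).astype(float)

# Centre each motivator column, then run PCA via singular value decomposition.
centred = responses - responses.mean(axis=0)
U, s, Vt = np.linalg.svd(centred, full_matrices=False)

explained = (s ** 2) / np.sum(s ** 2)   # variance explained by each component
loadings = Vt[:5]                       # keep a five-component structure, as in the study
scores = centred @ loadings.T           # respondents projected onto the five components

print(explained[:5].sum())              # share of variance covered by five components
print(loadings.shape, scores.shape)     # (5, 20) and (176, 5)
```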

Conclusions

The five-factor structure provides an intuitive categorization for the set of variables and can be used to explain other motivational models presented in the literature. This contributes to a better understanding of motivation in software engineering.

20.

Context

Frequent changes to groups of software entities belonging to different parts of the system may indicate unwanted couplings between those parts. Visualizations of co-changing software entities have been proposed to help developers identify unwanted couplings. Identifying unwanted couplings, however, is only the first step towards an important goal of a software architect: to improve the decomposition of the software system. An in-depth analysis of co-changing entities is needed to understand the underlying reasons for co-changes, and also determine how to resolve the issues.
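The abstract takes co-change data as a given; as background only, the sketch below shows one simple way co-changing entities can be mined from a version-control history by counting file pairs that appear in the same commit. The commit list, the support threshold, and the module heuristic are assumptions for illustration.

```python
from collections import Counter
from itertools import combinations

# Invented commit history: each commit is the set of files it touched.
commits = [
    {"ui/View.java", "core/Model.java"},
    {"ui/View.java", "core/Model.java", "util/Log.java"},
    {"core/Model.java", "db/Store.java"},
    {"ui/View.java", "core/Model.java"},
    {"util/Log.java"},
]

# Count how often each pair of files changes together.
co_changes = Counter()
for files in commits:
    for pair in combinations(sorted(files), 2):
        co_changes[pair] += 1

# Pairs that co-change at least `threshold` times and live in different top-level
# modules are candidate unwanted couplings worth inspecting in a visualization.
threshold = 3
for (a, b), count in co_changes.most_common():
    if count >= threshold and a.split("/")[0] != b.split("/")[0]:
        print(f"{a} <-> {b}: co-changed {count} times")
```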

Objective

In this paper we discuss how interactive visualizations can support the process of analyzing the identified unwanted couplings.

Method

We applied a tool that interactively visualizes software evolution in 10 working sessions with architects and developers of a large embedded software system having a development history of more than a decade.

Results

The participants of the working sessions were overall very positive about their experiences with the interactive visualizations. In 70% of the cases investigated, a decision could be taken on how to resolve the unwanted couplings.

Conclusion

Our experience suggests that interactive visualization not only helps to identify unwanted couplings but also helps experts to reason about and resolve them.
