Similar Articles
1.

Context

There is surprisingly little empirical software engineering research (ESER) that has analysed and reported the rich, fine-grained behaviour of phenomena over time using qualitative and quantitative data. The ESER community also increasingly recognises the need to develop theories of software engineering phenomena, e.g. theories of the actual behaviour of software projects at the project level and over time.

Objective

To examine the use of the longitudinal, chronological case study (LCCS) as a research strategy for investigating the rich, fine-grained behaviour of phenomena over time using qualitative and quantitative data.

Method

Review the methodological literature on longitudinal case study. Define the LCCS and demonstrate the development and application of the LCCS research strategy to the investigation of Project C, a software development project at IBM Hursley Park. Use the study to consider prospects for LCCSs, and to make progress on a theory of software project behaviour.

Results

LCCSs appear to provide insights that are hard to achieve using existing research strategies, such as the survey study. The LCCS strategy has basic requirements that data be time-indexed, relatively fine-grained, and collected contemporaneously with the events to which they refer. Preliminary progress is made on a theory of software project behaviour.

Conclusion

LCCS appears well suited to analysing and reporting rich, fine-grained behaviour of phenomena over time.

2.

Context

In recent years, many software companies have considered Software Process Improvement (SPI) as essential for successful software development. These companies have also shown special interest in IT Service Management (ITSM). SPI standards have evolved to incorporate ITSM best practices.

Objective

This paper presents a systematic literature review of ITSM Process Improvement initiatives based on the ISO/IEC 15504 standard for process assessment and improvement.

Method

We performed a systematic literature review based on the guidelines proposed by Kitchenham and the review protocol template developed by Biolchini et al.

Results

Twenty-eight relevant studies related to ITSM Process Improvement have been found. From the analysis of these studies, nine different ITSM Process Improvement initiatives have been detected. Seven of these initiatives use ISO/IEC 15504 conformant process assessment methods.

Conclusion

During the last decade, to satisfy the ongoing demand from mature software development companies for assessing and improving ITSM processes, different models that use the measurement framework of ISO/IEC 15504 have been developed. However, it is still necessary to define a method with the guidelines needed to implement both software development processes and ITSM processes with reduced effort, especially because some processes in the two categories overlap.

3.

Context

Writing software for the current generation of parallel systems requires significant programmer effort, and the community is seeking alternatives that reduce effort while still achieving good performance.

Objective

Measure the effect of parallel programming models (message-passing vs. PRAM-like) on programmer effort.

Design, setting, and subjects

One group of subjects implemented sparse-matrix dense-vector multiplication using message-passing (MPI), and a second group solved the same problem using a PRAM-like model (XMTC). The subjects were students in two graduate-level classes: one class was taught MPI and the other was taught XMTC.

Main outcome measures

Development time, program correctness.

Results

Mean XMTC development time was 4.8 h less than mean MPI development time (95% confidence interval, 2.0-7.7), a 46% reduction. XMTC programs were more likely to be correct, but the difference in correctness rates was not statistically significant (p = .16).
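
For readers who wish to reproduce this style of analysis, the following minimal sketch shows one conventional way to compute such a confidence interval (Welch's t-interval for a difference in means). The development-time values are invented for illustration and are not the study's data.

# Minimal sketch: 95% confidence interval for a difference in mean
# development times (hours), using Welch's t-interval. The values below
# are invented for illustration; they are NOT the study's data.
import numpy as np
from scipy import stats

mpi_hours = np.array([12.5, 9.0, 14.2, 11.1, 8.7, 13.4])  # hypothetical
xmtc_hours = np.array([6.1, 5.4, 7.8, 4.9, 6.6, 5.2])     # hypothetical

diff = mpi_hours.mean() - xmtc_hours.mean()
var_m = mpi_hours.var(ddof=1) / len(mpi_hours)
var_x = xmtc_hours.var(ddof=1) / len(xmtc_hours)
se = np.sqrt(var_m + var_x)
# Welch-Satterthwaite degrees of freedom
df = (var_m + var_x) ** 2 / (var_m ** 2 / (len(mpi_hours) - 1)
                             + var_x ** 2 / (len(xmtc_hours) - 1))
t_crit = stats.t.ppf(0.975, df)
print(f"mean difference: {diff:.1f} h, "
      f"95% CI: ({diff - t_crit * se:.1f}, {diff + t_crit * se:.1f})")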

Conclusions

XMTC solutions for this particular problem required less effort than MPI equivalents, but further studies are necessary that examine different types of problems and different levels of programmer experience.

4.

Context

We are strong advocates of evidence-based software engineering (EBSE) in general and systematic literature reviews (SLRs) in particular. We believe it is essential that the SLR methodology is used constructively to support software engineering research.

Objective

This study assesses the value of mapping studies, a form of SLR that aims to identify and categorise the available research on a broad software engineering topic.

Method

We conducted a multi-case, participant-observer case study using five examples of studies that were based on preceding mapping studies. We also validated our results by contacting two other researchers who had undertaken studies based on preceding mapping studies and by assessing review comments related to our follow-on studies.

Results

Our original case study identified 11 unique benefits that can accrue from basing research on a preceding mapping study, of which only two were case-specific. We also identified nine problems associated with using preceding mapping studies, of which two were case-specific. These results were consistent with the information obtained from the validation activities. We did not find an example of an independent research group making use of a mapping study produced by other researchers.

Conclusion

Mapping studies can save time and effort for researchers and provide baselines to assist new research efforts. However, they must be of high quality in terms of completeness and rigour if they are to be a reliable basis for follow-on research.

5.

Context

Software migration, in particular migration towards the Web and towards distributed architectures, is a challenging and complex activity. It has been particularly relevant in recent years due to the large number of migration projects the industry has had to face because of the increasing pervasiveness of the Web and of mobile devices.

Objective

This paper reports a survey aimed at identifying the state of the practice in Italian industry regarding previous experience with software migration projects, specifically those concerning information systems, the tools adopted, and the emerging needs and problems.

Method

The study was carried out among 59 Italian Information Technology companies; for each company, a representative answered an online questionnaire concerning migration experience, the technologies involved in migration projects, the tools adopted, and the problems that occurred during the projects.

Results

Results indicate that migration, especially towards the Web, is highly relevant for Italian IT companies, and that companies increasingly adopt free and open source solutions rather than commercial ones. Results also indicate that the adoption of specific migration tools is still very limited, either because of a lack of skills and knowledge or because of a lack of mature and adequate options.

Conclusions

Findings from this survey suggest the need for further technology transfer between academia and industry to favour the adoption of software migration techniques and tools.

6.

Context

One of the difficulties faced by software development Project Managers is estimating the cost and schedule for new projects. Previous industry surveys have concluded that software size and cost estimation is a significant technical area of concern. In order to estimate cost and schedule it is important to have a good understanding of the size of the software product to be developed. There are a number of techniques used to derive software size, with function points being amongst the most documented.

Objective

In this paper we explore the utility of function point software sizing techniques when applied to two levels of software requirements documentation in a commercial software development organisation. The goal of the research is to appraise the value (cost/benefit) which functional sizing techniques can bring to the project planning and management of software projects within a small-to-medium sized software development enterprise (SME).

Method

Functional counts were made at the bid and detailed functional specification stages for each of five commercial projects used in the research. Three variants of the NESMA method were used to determine these function counts. Through a structured interview session, feedback on the sizing results was obtained to evaluate their feasibility and potential future contribution to the company.
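
As an illustration of what a simplified functional sizing method computes, the sketch below implements the commonly published defaults of the Estimated and Indicative NESMA variants (data functions rated Low, transactional functions rated Average; the 35/15 rule for the indicative count). The weights and the example counts are assumptions for illustration; verify them against the NESMA counting manual before use.

# Sketch of simplified NESMA function point counting, using commonly
# published default weights. Verify against the NESMA counting manual.
WEIGHTS = {"ILF": 7, "EIF": 5, "EI": 4, "EO": 5, "EQ": 4}  # assumed defaults

def estimated_nesma(counts):
    """counts maps function type (ILF, EIF, EI, EO, EQ) to its number."""
    return sum(WEIGHTS[t] * n for t, n in counts.items())

def indicative_nesma(ilfs, eifs):
    """Rough early estimate from data functions only (35/15 rule)."""
    return 35 * ilfs + 15 * eifs

# Hypothetical bid-stage counts for a small project:
print(estimated_nesma({"ILF": 4, "EIF": 2, "EI": 10, "EO": 6, "EQ": 5}))  # 128
print(indicative_nesma(4, 2))  # 170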

Results

The results of our research suggest there is value in performing size estimates at two appropriate stages in the software development lifecycle, with simplified methods providing the optimal return on effort expended.

Conclusion

The ‘Estimated NESMA’ is the most appropriate tool for size estimation in the company studied. The use of software sizing provides a valuable contribution that would augment, but not replace, the company’s existing cost estimation approach.

7.
Reliability analysis and optimal version-updating for open source software

Context

Although reliability is a major concern of most open source projects, research on this problem has attracted attention only recently. In addition, optimal version-updating for open source software, considering its special properties, has not yet been discussed.

Objective

This paper studies reliability analysis and optimal version-updating for open source software.

Method

A modified non-homogeneous Poisson process (NHPP) model is developed for open source software reliability modeling and analysis. Based on this model, optimal version-updating for open source software is investigated as well. In the decision process, the rapid-release strategy and the level of reliability are the two most important factors; however, they essentially contradict each other. To consider these two conflicting factors simultaneously, a new decision model based on multi-attribute utility theory is proposed.
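
The paper's modified NHPP is not reproduced here, but the sketch below fits the classic Goel-Okumoto mean value function m(t) = a(1 - exp(-b*t)), the baseline such modifications typically start from, to cumulative failure data; the failure counts are invented for illustration.

# Minimal sketch: fit a classic Goel-Okumoto NHPP mean value function
# m(t) = a * (1 - exp(-b*t)) to cumulative failure counts. The paper
# proposes a *modified* NHPP whose exact form is not reproduced here;
# the data below are invented.
import numpy as np
from scipy.optimize import curve_fit

def mean_value(t, a, b):
    return a * (1.0 - np.exp(-b * t))

weeks = np.arange(1, 13)                           # hypothetical test weeks
cum_failures = np.array([5, 11, 18, 24, 28, 33,
                         36, 39, 41, 43, 44, 45])  # hypothetical counts

(a, b), _ = curve_fit(mean_value, weeks, cum_failures, p0=(50.0, 0.1))
print(f"expected total faults a = {a:.1f}, detection rate b = {b:.3f}")

# NHPP reliability over a future interval x, given test time T:
T, x = weeks[-1], 4.0
reliability = np.exp(-(mean_value(T + x, a, b) - mean_value(T, a, b)))
print(f"R(x={x} | T={T}) = {reliability:.3f}")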

Results

Our models are tested on real-world data sets from two well-known open source projects: Apache and GNOME. We find that traditional software reliability models overestimate the reliability of open source software. In addition, the proposed decision model can help management make a rational decision on optimal version-updating for open source software.

Conclusion

Empirical results reveal that the proposed model for open source software reliability describes the failure process more accurately. Furthermore, the proposed decision model can assist management in appropriately determining the optimal version-update time for open source software.

8.

Context

Staff turnover in organizations is an important issue that should be taken into account mainly for two reasons:
1. Employees carry an organization’s knowledge in their heads and take it with them wherever they go.
2. Knowledge accessibility is limited to the amount of knowledge employees want to share.

Objective

The aim of this work is to provide a set of guidelines to develop knowledge-based Process Asset Libraries (PAL) to store software engineering best practices, implemented as a wiki.

Method

Fieldwork was carried out in a 2-year training course in agile development. This was validated in two phases (with and without PAL), which were subdivided into two stages: Training and Project.

Results

The study demonstrates that, on the one hand, the learning process can be facilitated by using PAL to transfer software process knowledge and, on the other hand, that junior software engineers developed products with a greater degree of independence.

Conclusion

PAL, as a knowledge repository, helps software engineers to learn about development processes and improves the use of agile processes.

9.

Context

Since the introduction of evidence-based software engineering in 2004, systematic literature review (SLR) has been increasingly used as a method for conducting secondary studies in software engineering. Two tertiary studies, published in 2009 and 2010, identified and analysed 54 SLRs published in journals and conferences in the period between 1st January 2004 and 30th June 2008.

Objective

In this article, our goal was to extend and update the two previous tertiary studies to cover the period between 1st July 2008 and 31st December 2009. We analysed the quality, coverage of software engineering topics, and potential impact of published SLRs for education and practice.

Method

We performed automatic and manual searches for SLRs published in journals and conference proceedings, analysed the relevant studies, and compared and integrated our findings with the two previous tertiary studies.

Results

We found 67 new SLRs addressing 24 software engineering topics. Among these studies, 15 were considered relevant to the undergraduate educational curriculum, and 40 appeared of possible interest to practitioners. We found that the number of SLRs in software engineering is increasing, the overall quality of the studies is improving, and the number of researchers and research organisations worldwide that are conducting SLRs is also increasing and spreading.

Conclusion

Our findings suggest that the software engineering research community is starting to adopt SLRs consistently as a research method. However, the majority of the SLRs did not evaluate the quality of primary studies and failed to provide guidelines for practitioners, thus decreasing their potential impact on software engineering practice.

10.

Context

Source code revision control systems contain vast amounts of data that can be exploited for various purposes. For example, the data can be used as a base for estimating future code maintenance effort in order to plan software maintenance activities. Previous work has extensively studied the use of metrics extracted from object-oriented source code to estimate future coding effort. In comparison, the use of other types of metrics for this purpose has received significantly less attention.

Objective

This paper applies machine learning techniques to unveil predictors of yearly cumulative code churn of software projects on the basis of metrics extracted from revision control systems.

Method

The study is based on a collection of object-oriented code metrics, XML code metrics, and organisational metrics. Several models are constructed with different subsets of these metrics. The predictive power of these models is analysed based on a dataset extracted from eight open-source projects.
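
A minimal sketch of the kind of comparison described, assuming a metrics table with hypothetical column names for the three metric families; the study's actual features, learner, and validation setup are not reproduced here.

# Sketch: compare metric subsets as predictors of yearly code churn.
# Column names, the learner, and the CSV file are assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

df = pd.read_csv("project_metrics.csv")  # hypothetical dataset
subsets = {
    "code":           ["wmc", "cbo", "loc"],            # OO code metrics
    "xml":            ["xml_elements", "xml_depth"],    # XML code metrics
    "organisational": ["num_authors", "commit_count"],  # organisational
}
subsets["combined"] = sum(subsets.values(), [])
y = df["yearly_code_churn"]

for name, cols in subsets.items():
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    score = cross_val_score(model, df[cols], y, cv=5, scoring="r2").mean()
    print(f"{name:>14}: mean R^2 = {score:.2f}")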

Results

The study shows that a code churn estimation model built purely with organisational metrics is superior to one built purely with code metrics. However, a combined model provides the highest predictive power.

Conclusion

The results suggest that code metrics in general, and XML metrics in particular, are complementary to organisational metrics for the purpose of estimating code churn.

11.

Context

Software defect prediction studies usually build models using within-company data, but very few have focused on prediction models trained with cross-company data. Models built on within-company data are difficult to employ in practice because local data repositories are often unavailable. Recently, transfer learning has attracted more and more attention for building classifiers in a target domain using data from a related source domain. It is very useful when the distributions of training and test instances differ, but is it appropriate for cross-company software defect prediction?

Objective

In this paper, we consider the cross-company defect prediction scenario, where source and target data are drawn from different companies. To harness cross-company data, we exploit transfer learning to build a fast and highly effective prediction model.

Method

Unlike prior works that select training data similar to the test data, we propose a novel algorithm called Transfer Naive Bayes (TNB) that uses the information of all the proper features in the training data. Our solution estimates the distribution of the test data and transfers cross-company data information into the weights of the training data. The defect prediction model is then built on these weighted data.
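
A sketch of the instance-weighting idea as commonly described for TNB: count, for each cross-company training instance, how many of its feature values fall within the test set's per-feature min-max range, turn that similarity count into a gravitation-like weight, and train a weighted Naive Bayes on the result. Details may differ from the authors' exact algorithm, and the data below are synthetic.

# Sketch of TNB-style instance weighting for cross-company defect data.
# The weighting formula follows the commonly described gravitation-like
# form and may differ in detail from the authors' exact algorithm.
import numpy as np
from sklearn.naive_bayes import GaussianNB

def tnb_weights(X_train, X_test):
    lo, hi = X_test.min(axis=0), X_test.max(axis=0)
    k = X_train.shape[1]                                 # number of features
    s = ((X_train >= lo) & (X_train <= hi)).sum(axis=1)  # similarity counts
    return s / (k - s + 1) ** 2                          # gravitation-like weights

# Synthetic data: rows are modules, columns are static code metrics.
rng = np.random.default_rng(0)
X_src = rng.normal(size=(200, 5))           # source-company modules
y_src = rng.integers(0, 2, size=200)        # defect labels
X_tgt = rng.normal(loc=0.5, size=(50, 5))   # target-company modules

w = tnb_weights(X_src, X_tgt)
clf = GaussianNB().fit(X_src, y_src, sample_weight=w)
print(clf.predict(X_tgt)[:10])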

Results

This article presents a theoretical analysis of the comparative methods and reports experimental results on data sets from different organizations. The results indicate that TNB is more accurate in terms of AUC (the area under the receiver operating characteristic curve) and runs faster than state-of-the-art methods.

Conclusion

We conclude that when there are too few local training data to train good classifiers, useful knowledge from different-distribution training data at the feature level may help. We are optimistic that our transfer learning method can guide optimal resource allocation strategies, which may reduce software testing cost and increase the effectiveness of the software testing process.

12.

Context

A software reference architecture is a generic architecture for a class of systems that is used as a foundation for the design of concrete architectures from this class. The generic nature of reference architectures leads to less well-defined architecture design and application contexts, which makes architecture goal definition and architecture design non-trivial steps, rooted in uncertainty.

Objective

The paper presents a structured and comprehensive study of the congruence between the context, goals, and design of software reference architectures. It proposes a tool for designing congruent reference architectures and for analysing the level of congruence of existing reference architectures.

Method

We define a framework for congruent reference architectures. The framework is based on state-of-the-art results from literature and practice. We validate our framework and its quality as an analytical tool by applying it to the analysis of 24 reference architectures. The conclusions from our analysis are compared with the opinions of experts on these reference architectures, documented in literature and dedicated communication.

Results

Our framework consists of a multi-dimensional classification space and of five types of reference architectures formed by combining specific values from this space. Reference architectures that can be classified into one of these types have a better chance of success. The validation of our framework confirms its quality as a tool for analysing the congruence of software reference architectures.

Conclusion

This paper supports software architects and scientists in the inception, design, and application of congruent software reference architectures. Applying the tool improves a reference architecture’s chance of success.

13.

Context

Software development effort estimation (SDEE) is the process of predicting the effort required to develop a software system. In order to improve estimation accuracy, many researchers have proposed machine learning (ML) based SDEE models (ML models) since the 1990s. However, there has been no attempt to analyze the empirical evidence on ML models in a systematic way.

Objective

This research aims to systematically analyze ML models from four aspects: type of ML technique, estimation accuracy, model comparison, and estimation context.

Method

We performed a systematic literature review of empirical studies on ML models published in the last two decades (1991-2010).

Results

We identified 84 primary studies relevant to the objective of this research. After investigating these studies, we found that eight types of ML techniques have been employed in SDEE models. Overall, the estimation accuracy of these ML models is close to the acceptable level and is better than that of non-ML models. Furthermore, different ML models have different strengths and weaknesses and thus favor different estimation contexts.

Conclusion

ML models are promising in the field of SDEE. However, the application of ML models in industry is still limited, so more effort and incentives are needed to facilitate their application. To this end, based on the findings of this review, we provide recommendations for researchers as well as guidelines for practitioners.

14.

Context

The context of this research is software process improvement (SPI) in small and medium Web companies.

Objective

The primary objective of this paper is to identify SPI success factors for small and medium Web companies.

Method

To achieve this goal, we conducted semi-structured, open-ended interviews with 21 participants representing 11 different companies in Pakistan, and analyzed the data qualitatively using the Glaserian strand of grounded theory research procedures. The key steps of these procedures that were employed in this research included open coding, focused coding, theoretical coding, theoretical sampling, constant comparison, and scaling up.

Results

An initial framework of key SPI success factors for small and medium Web companies was proposed, which can be of use for small and medium Web companies engaged in SPI. The paper also differentiates between small and medium Web companies and analyzes crucial SPI requirements for companies operating in the Web development domain.

Conclusion

The results of this work, in particular the use of qualitative techniques, allowed us to obtain rich insight into SPI success factors for small and medium Web companies. Future work comprises the validation of these SPI success factors with small and medium Web companies.

15.

Context

Comparing and contrasting evidence from multiple studies is necessary to build knowledge and reach conclusions about the empirical support for a phenomenon. Therefore, research synthesis is at the center of the scientific enterprise in the software engineering discipline.

Objective

The objective of this article is to contribute to a better understanding of the challenges in synthesizing software engineering research and their implications for the progress of research and practice.

Method

A tertiary study of journal articles and full proceedings papers from the inception of evidence-based software engineering was performed to assess the types and methods of research synthesis in systematic reviews in software engineering.

Results

As many as half of the 49 reviews included in the study did not contain any synthesis. Of the studies that did contain synthesis, two thirds performed a narrative or a thematic synthesis. Only a few studies adequately demonstrated a robust, academic approach to research synthesis.

Conclusion

We concluded that, despite the focus on systematic reviews, limited attention has been paid to research synthesis in software engineering. This trend needs to change, and a repertoire of synthesis methods needs to become an integral part of systematic reviews to increase their significance and utility for research and practice.

16.

Context

Although agile software development methods such as SCRUM and DSDM are gaining popularity, the consequences of applying agile principles to software product management have received little attention until now.

Objective

This paper fills this gap by introducing a method for applying SCRUM principles to software product management.

Method

A case study research approach is employed to describe and evaluate this method.

Results

This has resulted in the ‘agile requirements refinery’, an extension to the SCRUM process that enables product managers to cope with complex requirements in an agile development environment. A case study is presented to illustrate how agile methods can be applied to software product management.

Conclusions

The experiences of the case study company are provided as a set of lessons learned that will help others to apply agile principles to their software product management process.

17.
18.
Identifying relevant studies in software engineering

Context

Systematic literature review (SLR) has become an important research methodology in software engineering since the introduction of evidence-based software engineering (EBSE) in 2004. One critical step in applying this methodology is designing and executing an appropriate and effective search strategy. This is a time-consuming and error-prone step that needs to be carefully planned and implemented. There is an apparent need for a systematic approach to designing, executing, and evaluating a suitable search strategy for optimally retrieving the target literature from digital libraries.

Objective

The main objective of the research reported in this paper is to improve the search step of undertaking SLRs in software engineering (SE) by devising and evaluating systematic and practical approaches to identifying relevant studies in SE.

Method

We systematically selected and analytically studied a large number of papers (SLRs) to understand the state of the practice of search strategies in EBSE. Having identified the limitations of the current ad hoc search strategies used by SE researchers for SLRs, we devised a systematic, evidence-based approach to developing and executing optimal search strategies in SLRs. The proposed approach incorporates the concept of a ‘quasi-gold standard’ (QGS), a collection of known studies, and a corresponding ‘quasi-sensitivity’ measure into the search process for evaluating search performance.
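
The quasi-sensitivity measure itself reduces to a simple proportion: the fraction of the quasi-gold standard that a candidate search string retrieves. A minimal sketch follows; the function and variable names are ours, not the paper's.

# Quasi-sensitivity: fraction of the quasi-gold standard (QGS) of known
# relevant studies that a search actually retrieves. Names are ours.
def quasi_sensitivity(retrieved, qgs):
    return len(set(retrieved) & set(qgs)) / len(qgs)

qgs = {"paper_a", "paper_b", "paper_c", "paper_d"}        # known studies
retrieved = {"paper_a", "paper_c", "paper_x", "paper_y"}  # search results
print(f"quasi-sensitivity = {quasi_sensitivity(retrieved, qgs):.0%}")  # 50%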

Results

We conducted two participant-observer case studies to demonstrate and evaluate the adoption of the proposed QGS-based systematic search approach in support of SLRs in SE research.

Conclusion

Based on the case studies, we find that the approach improves the rigour of the search process in an SLR and can serve as a supplement to the guidelines for SLRs in EBSE. We plan to further evaluate the proposed approach through a series of case studies on varying research topics in SE.

19.
A systematic review of software architecture evolution research

Context

Software evolvability describes a software system’s ability to easily accommodate future changes. It is a fundamental characteristic for making strategic decisions and for increasing the economic value of software. For long-lived systems, evolvability needs to be addressed explicitly during the entire software lifecycle in order to prolong the productive lifetime of software systems. For this reason, many studies have been carried out in this area by both researchers and industry practitioners. These studies comprise a spectrum of techniques and practices covering various activities in the software lifecycle. However, no systematic review has previously been conducted to provide an extensive overview of software architecture evolvability research.

Objective

In this work, we present such a systematic review of architecting for software evolvability. The objective of this review is to obtain an overview of the existing approaches to analysing and improving software evolvability at the architectural level, and to investigate their impact on research and practice.

Method

The identification of the primary studies in this review was based on a pre-defined search strategy and a multi-step selection process.

Results

Based on research topics in these studies, we have identified five main categories of themes: (i) techniques supporting quality consideration during software architecture design, (ii) architectural quality evaluation, (iii) economic valuation, (iv) architectural knowledge management, and (v) modeling techniques. A comprehensive overview of these categories and related studies is presented.

Conclusion

The findings of this review also suggest directions for further research and practice: (i) a theoretical foundation for software evolution research needs to be established, since expertise in this area is still built on case studies rather than generalised knowledge; and (ii) appropriate techniques need to be combined to address the multifaceted perspectives of software evolvability, since each technique has a specific focus and a specific context in the software lifecycle for which it is appropriate.

20.

Context

Variability management (VM) is one of the most important activities of software product-line engineering (SPLE), which aims to develop software-intensive systems using platforms and mass customization. VM encompasses the activities of eliciting and representing variability in software artefacts, establishing and managing dependencies among different variabilities, and supporting the exploitation of the variabilities for building and evolving a family of software systems. The software product line (SPL) community has devoted a huge amount of effort over the last two decades to developing various approaches to variability-related challenges, and several dozen VM approaches have been reported. However, there has been no systematic effort to study how the reported VM approaches have been evaluated.

Objective

The objectives of this research are to review the status of evaluation of reported VM approaches and to synthesize the available evidence about the effects of the reported approaches.

Method

We carried out a systematic literature review of the VM approaches in SPLE reported from the 1990s until December 2007.

Results

We selected 97 papers according to our inclusion and exclusion criteria. The selected papers appeared in 56 publication venues. We found that only a small number of the reviewed approaches had been evaluated using rigorous scientific methods. A detailed investigation of the reviewed studies employing empirical research methods revealed significant quality deficiencies in various aspects of the quality assessment criteria used. The synthesis of the available evidence showed that all studies, except one, reported only positive effects.

Conclusion

The findings from this systematic review show that a large majority of the reported VM approaches have not been sufficiently evaluated using scientifically rigorous methods. The available evidence is sparse, and the quality of the presented evidence is quite low. The findings highlight the areas in need of improvement, i.e., rigorous evaluation of VM approaches. However, the reported evidence is quite consistent across different studies, which means the proposed approaches may be very beneficial when applied properly in appropriate situations. Hence, further investigations need to pay more attention to the contexts in which different approaches can be most beneficial.
