Similar Literature
20 similar documents found (search time: 31 ms)
1.
Analogy-based software effort estimation using fuzzy numbers

Background

Early-stage software effort estimation is a crucial task for project bidding and feasibility studies. Since the data collected during the early stages of a software development lifecycle are always imprecise and uncertain, it is very hard to deliver accurate estimates. Analogy-based estimation, one of the most popular estimation methods, is rarely used during the early stage of a project because of the uncertainty associated with attribute measurement and data availability.

Aims

We have integrated analogy-based estimation with fuzzy numbers in order to improve the performance of software project effort estimation during the early stages of a software development lifecycle, using all available early data. In particular, this paper proposes a new software project similarity measure and a new adaptation technique based on fuzzy numbers.

Method

Empirical evaluations using a jack-knifing procedure have been carried out on five benchmark data sets of software projects, namely ISBSG, Desharnais, Kemerer, Albrecht, and COCOMO, and the results are reported. The results are compared to those obtained by methods employed in the literature using case-based reasoning and stepwise regression.

Results

In all data sets, the empirical evaluations showed that the proposed similarity measure and adaptation technique significantly improved the performance of analogy-based estimation during the early stages of software development. The results also showed that the proposed method outperforms well-known estimation techniques such as case-based reasoning and stepwise regression.

Conclusions

It is concluded that the proposed estimation model could form a useful approach for early-stage estimation, especially when the available data are largely uncertain.
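The abstract does not give the proposed similarity measure or adaptation technique; as an illustration of the general idea only, a minimal sketch of analogy-based estimation over triangular fuzzy numbers, with an assumed distance-based similarity and similarity-weighted adaptation, might look like:

```python
# Hedged sketch: analogy-based estimation with triangular fuzzy numbers.
# The paper's exact similarity and adaptation formulas are not given in
# the abstract; this illustrates one common distance-based variant.

def fuzzy_distance(a, b):
    """Euclidean-style distance between triangular fuzzy numbers (l, m, u)."""
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / 3) ** 0.5

def similarity(p, q):
    """Similarity of two projects described by lists of fuzzy attributes."""
    d = sum(fuzzy_distance(a, b) for a, b in zip(p, q)) / len(p)
    return 1 / (1 + d)  # maps distance into (0, 1]

def estimate(target, history, k=2):
    """Effort = similarity-weighted mean of the k closest analogues."""
    ranked = sorted(history, key=lambda h: -similarity(target, h["attrs"]))[:k]
    w = [similarity(target, h["attrs"]) for h in ranked]
    return sum(wi * h["effort"] for wi, h in zip(w, ranked)) / sum(w)

# Illustrative data: one fuzzy size attribute per project, effort in hours.
history = [
    {"attrs": [(90, 100, 110)], "effort": 500},
    {"attrs": [(190, 200, 210)], "effort": 900},
]
print(round(estimate([(95, 105, 115)], history)))
```

The weighting pulls the estimate towards the most similar past project, which is the core of the analogy-based adaptation step.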

2.

Context

One of the difficulties faced by software development Project Managers is estimating the cost and schedule for new projects. Previous industry surveys have concluded that software size and cost estimation is a significant technical area of concern. In order to estimate cost and schedule it is important to have a good understanding of the size of the software product to be developed. There are a number of techniques used to derive software size, with function points being amongst the most documented.

Objective

In this paper we explore the utility of function point software sizing techniques when applied to two levels of software requirements documentation in a commercial software development organisation. The goal of the research is to appraise the value (cost/benefit) that functional sizing techniques can bring to the project planning and management of software projects within a small-to-medium sized software development enterprise (SME).

Method

Functional counts were made at the bid and detailed functional specification stages for each of the five commercial projects used in the research. Three variants of the NESMA method were used to determine these function counts. Through a structured interview session, feedback on the sizing results was obtained to evaluate their feasibility and potential future contribution to the company.

Results

The results of our research suggest there is value in performing size estimates at two appropriate stages in the software development lifecycle, with simplified methods providing the optimal return on effort expended.

Conclusion

The ‘Estimated NESMA’ is the most appropriate tool for use in size estimation for the company studied. The use of software sizing provides a valuable contribution which would augment, but not replace, the company’s existing cost estimation approach.
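The counting rules are not reproduced in the abstract; for context, the ‘Estimated NESMA’ variant is commonly described as rating all data functions as Low and all transactional functions as Average, so only the counts of identified functions are needed. A hedged sketch under that assumption:

```python
# Hedged sketch of the "Estimated NESMA" function point count: data
# functions (ILF, EIF) are rated Low and transactional functions
# (EI, EO, EQ) Average, using the standard IFPUG weights for those
# ratings, so only the number of functions of each type is needed.
WEIGHTS = {"ILF": 7, "EIF": 5, "EI": 4, "EO": 5, "EQ": 4}

def estimated_nesma(counts):
    """counts: mapping of function type -> number of identified functions."""
    return sum(WEIGHTS[t] * n for t, n in counts.items())

# e.g. an illustrative bid-stage count
print(estimated_nesma({"ILF": 3, "EIF": 1, "EI": 10, "EO": 4, "EQ": 5}))
```

Because no complexity assessment per function is required, this variant suits the bid stage, where only a function inventory is available.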

3.

Context

Software maintenance is an important software engineering activity that has been reported to account for the majority of the software total cost. Thus, understanding the factors that influence the cost of software maintenance tasks helps maintainers to make informed decisions about their work.

Objective

This paper describes a controlled experiment of student programmers performing maintenance tasks on a C++ program. The objective of the study is to assess the maintenance size, effort, and effort distributions of three different maintenance types and to describe estimation models to predict the programmer’s effort spent on maintenance tasks.

Method

Twenty-three graduate students and a senior majoring in computer science participated in the experiment. Each student was asked to perform the maintenance tasks required for one of the three task groups. The impact of different LOC metrics on maintenance effort was also evaluated by fitting the collected data to four estimation models.

Results

The results indicate that corrective maintenance is much less productive than enhancive and reductive maintenance, and that program comprehension activities require as much as 50% of the total effort in corrective maintenance. Moreover, the best software effort model estimates the time of 79% of the programmers with an error of 30% or less.

Conclusion

Our study suggests that the LOC added, modified, and deleted metrics are good predictors for estimating the cost of software maintenance. Effort estimation models for maintenance work may use the LOC added, modified, and deleted metrics as independent parameters instead of the simple sum of the three. Another implication is that reducing business rules of the software requires a sizable proportion of the software maintenance effort. Finally, the differences in effort distribution among the maintenance types suggest that assigning maintenance tasks properly is important to utilize human resources effectively and efficiently.
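A model of the kind the conclusion suggests, with LOC added, modified, and deleted as separate predictors rather than their sum, can be sketched with ordinary least squares (the data and fitted coefficients below are illustrative, not the study's):

```python
# Hedged sketch: fit maintenance effort on LOC added/modified/deleted as
# three separate predictors, via ordinary least squares. The data set
# below is invented purely for illustration.
import numpy as np

# columns: intercept, LOC added, LOC modified, LOC deleted
X = np.array([
    [1, 120, 30, 10],
    [1, 40, 80, 5],
    [1, 10, 15, 60],
    [1, 200, 50, 20],
    [1, 60, 60, 30],
], dtype=float)
y = np.array([14.0, 11.0, 7.0, 22.0, 13.0])  # effort in person-hours

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
predicted = X @ beta
print(np.round(beta, 3))
```

Keeping the three LOC metrics separate lets the fit assign each maintenance type its own marginal cost, which a single summed-LOC predictor cannot do.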

4.
This paper presents results from two case studies and two experiments on how effort estimates impact software project work. The studies indicate that a meaningful interpretation of effort estimation accuracy requires knowledge about the size and nature of the impact of the effort estimates on the software work. For example, we found that projects with high priority on costs and incomplete requirements specifications were prone to adjust the work to fit the estimate when the estimates were too optimistic, while too optimistic estimates led to effort overruns for projects with high priority on quality and well specified requirements.

Two hypotheses were derived from the case studies and tested experimentally. The experiments indicate that: (1) effort estimates can be strongly impacted by anchor values, e.g., early indications of the required effort. This impact is present even when the estimators are told that the anchor values are irrelevant as estimation information; (2) too optimistic effort estimates lead to less use of effort and more errors compared with more realistic effort estimates on programming tasks.


5.

Context

Software development effort estimation (SDEE) is the process of predicting the effort required to develop a software system. In order to improve estimation accuracy, many researchers have proposed machine learning (ML) based SDEE models (ML models) since the 1990s. However, there has been no attempt to analyze the empirical evidence on ML models in a systematic way.

Objective

This research aims to systematically analyze ML models from four aspects: type of ML technique, estimation accuracy, model comparison, and estimation context.

Method

We performed a systematic literature review of empirical studies on ML models published in the last two decades (1991-2010).

Results

We identified 84 primary studies relevant to the objective of this research. After investigating these studies, we found that eight types of ML techniques have been employed in SDEE models. Overall, the estimation accuracy of these ML models is close to the acceptable level and better than that of non-ML models. Furthermore, different ML models have different strengths and weaknesses and thus favor different estimation contexts.

Conclusion

ML models are promising in the field of SDEE. However, the application of ML models in industry is still limited, so more effort and incentives are needed to facilitate their application. To this end, based on the findings of this review, we provide recommendations for researchers as well as guidelines for practitioners.

6.
The effort required to complete software projects is often estimated, completely or partially, using the judgment of experts, whose assessments may be biased. In general, such bias as there is seems to be towards estimates that are overly optimistic. The degree of bias varies from expert to expert and seems to depend on both conscious and unconscious processes. One possible approach to reducing this bias towards over-optimism is to combine the judgments of several experts. This paper describes an experiment in which experts with different backgrounds combined their estimates in group discussion. First, 20 software professionals were asked to provide individual estimates of the effort required for a software development project. Subsequently, they formed five estimation groups, each consisting of four experts. Each of these groups agreed on a project effort estimate via the pooling of knowledge in discussion. We found that the groups submitted less optimistic estimates than the individuals. Interestingly, the group discussion-based estimates were closer to the effort expended on the actual project than the average of the individual expert estimates, i.e., the group discussions led to better estimates than a mechanical averaging of the individual estimates. The groups' ability to identify a greater number of the activities required by the project is among the possible explanations for this reduction of bias.

7.
Simplifying effort estimation based on Use Case Points

Context

The Use Case Points (UCP) method can be used to estimate software development effort based on a use-case model and two sets of adjustment factors relating to the environmental and technical complexity of a project. The question arises whether all of these components are important from the effort estimation point of view.

Objective

This paper investigates the construction of UCP in order to find possible ways of simplifying it.

Method

A cross-validation procedure was used to compare the accuracy of the different variants of UCP (with and without the investigated simplifications). The analysis was based on data derived from a set of 14 projects for which effort ranged from 277 to 3593 man-hours. In addition, factor analysis was performed to investigate the possibility of reducing the number of adjustment factors.

Results

The two variants of UCP (with and without unadjusted actor weights, UAW) provided similar prediction accuracy. In addition, the adjustment factors had only a minor influence on the accuracy of UCP. The results of the factor analysis indicated that the number of adjustment factors could be reduced from 21 to 6 (2 environmental factors and 4 technical complexity factors). It was also observed that the variants of UCP calculated from steps were slightly more accurate than the variants calculated from transactions. Finally, a recently proposed use-case-based size metric, TTPoints, provided better accuracy than any of the investigated variants of UCP.

Conclusion

This study observed that the UCP method can be simplified by rejecting UAW, calculating UCP from steps instead of transactions, or simply counting the total number of steps in use cases. Moreover, two recently proposed use-case-based size metrics, Transactions and TTPoints, could be used as alternatives to UCP to estimate effort at the early stages of software development.
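For reference, the standard UCP computation that these simplifications modify can be sketched as follows (the weights are Karner's commonly cited values; the paper's exact variants are not reproduced in the abstract):

```python
# Hedged sketch of the standard Use Case Points computation that the
# study simplifies: UCP = (UAW + UUCW) * TCF * ECF, using Karner's
# commonly cited complexity weights.
ACTOR_W = {"simple": 1, "average": 2, "complex": 3}
UC_W = {"simple": 5, "average": 10, "complex": 15}

def ucp(actors, use_cases, tcf=1.0, ecf=1.0):
    """actors / use_cases: mapping of complexity class -> count."""
    uaw = sum(ACTOR_W[c] * n for c, n in actors.items())
    uucw = sum(UC_W[c] * n for c, n in use_cases.items())
    return (uaw + uucw) * tcf * ecf

# One simplification the study evaluates: drop UAW entirely.
def ucp_without_uaw(use_cases, tcf=1.0, ecf=1.0):
    return sum(UC_W[c] * n for c, n in use_cases.items()) * tcf * ecf

print(ucp({"simple": 2, "complex": 1}, {"average": 8, "complex": 2}))
```

With tcf and ecf left at 1.0 the adjustment factors are effectively removed, mirroring the study's observation that they contribute little to accuracy.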

8.
Estimating the effort required to complete web-development projects involves input from people in both technical (e.g., programming) and non-technical (e.g., user interaction design) roles. This paper examines how employees' roles and types of competence may affect their estimation strategy and performance. An analysis of actual web-development project data and results from an experiment suggest that people with technical competence provided less realistic project effort estimates than those with less technical competence. This means that more knowledge about how to implement a requirement specification does not always lead to better estimation performance. We discuss, among other things, two possible reasons for this observation: (1) technical competence induces a bottom-up, construction-based estimation strategy, while lack of this competence induces a more outside view of the project, using a top-down estimation strategy. An outside view may encourage greater use of the history of previous projects and reduce the bias towards over-optimism. (2) Software professionals in technical roles perceive that they are evaluated as more skilled when they provide low effort estimates. A consequence of our findings is that the choice of estimation strategy, estimation evaluation criteria, and feedback are important aspects to consider when seeking to improve estimation accuracy.

9.

Context

Source code revision control systems contain vast amounts of data that can be exploited for various purposes. For example, the data can be used as a base for estimating future code maintenance effort in order to plan software maintenance activities. Previous work has extensively studied the use of metrics extracted from object-oriented source code to estimate future coding effort. In comparison, the use of other types of metrics for this purpose has received significantly less attention.

Objective

This paper applies machine learning techniques to unveil predictors of yearly cumulative code churn of software projects on the basis of metrics extracted from revision control systems.

Method

The study is based on a collection of object-oriented code metrics, XML code metrics, and organisational metrics. Several models are constructed with different subsets of these metrics. The predictive power of these models is analysed based on a dataset extracted from eight open-source projects.

Results

The study shows that a code churn estimation model built purely with organisational metrics is superior to one built purely with code metrics. However, a combined model provides the highest predictive power.

Conclusion

The results suggest that code metrics in general, and XML metrics in particular, are complementary to organisational metrics for the purpose of estimating code churn.

10.
Jorgensen, M. IEEE Software, 2005, 22(3): 57-63
This article presents seven guidelines for producing realistic software development effort estimates. The guidelines derive from industrial experience and empirical studies. While many other guidelines exist for software effort estimation, these guidelines differ from them in three ways: 1) They base estimates on expert judgments rather than models. 2) They are easy to implement. 3) They use the most recent findings regarding judgment-based effort estimation. Estimating effort on the basis of expert judgment is the most common approach today, and the decision to use such processes instead of formal estimation models shouldn't be surprising. Simple process changes such as reframing questions can lead to more realistic estimates of software development efforts.

11.
To date, most research in software effort estimation has not taken chronology into account when selecting projects for training and validation sets. A chronological split uses projects' starting and completion dates, such that any model estimating effort for a new project p uses as its training set only projects completed prior to p's starting date. A study in 2009 ("S3") investigated the use of a chronological split taking into account a project's age. The research question investigated was whether a training set containing only the most recent past projects (a "moving window" of recent projects) would lead to more accurate estimates than using the entire history of past projects completed prior to the starting date of a new project. S3 found that moving windows could improve the accuracy of estimates. The study described herein replicates S3 using three different and independent data sets. Estimation models were built using regression, and accuracy was measured using absolute residuals. The results contradict S3: they show no gain in estimation accuracy when using windows for effort estimation. This is a surprising result: the intuition that recent data should be more helpful than old data for effort estimation is not supported. Several factors, which are discussed in this paper, might have contributed to these contradictory results. Our future work includes replicating this study using other datasets, to understand better when using windows is a suitable choice for software companies.
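The chronological split and moving window described above amount to a training-set selection step, which can be sketched as follows (a minimal illustration; the field names and window definition by project count are assumptions):

```python
# Hedged sketch of chronological training-set selection: a model for a
# new project may only train on projects completed before its start
# date, optionally restricted to a "moving window" of the N most
# recently completed projects.
from datetime import date

def training_set(history, new_start, window=None):
    done = [p for p in history if p["completed"] < new_start]
    done.sort(key=lambda p: p["completed"])
    return done[-window:] if window else done

history = [
    {"id": 1, "completed": date(2004, 1, 1)},
    {"id": 2, "completed": date(2005, 6, 1)},
    {"id": 3, "completed": date(2006, 3, 1)},
    {"id": 4, "completed": date(2007, 9, 1)},  # finishes after the new start
]
full = training_set(history, date(2007, 1, 1))
windowed = training_set(history, date(2007, 1, 1), window=2)
print([p["id"] for p in full], [p["id"] for p in windowed])
```

The study's question is whether the regression fitted on `windowed` beats the one fitted on `full`; its data suggest it does not.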

12.
Expert judgment is widely applied in practical effort estimation, but higher demands are now being placed on the transparency of the estimation process. To address this shortcoming of expert estimation, this paper proposes a framework model that supports both expert estimation and analogy-based estimation, drawing on the Delphi method and process decomposition to reduce the estimators' subjective influence. The framework is simple, practical, and flexible; by increasing the recording of objective factors during estimation, it makes the estimation process more transparent and repeatable, and it is suitable for adoption and promotion by software organisations.

13.
Inaccurate estimates of software development effort are a frequently reported cause of IT-project failures. We report results from a study that investigated the effect of introducing lessons-learned sessions on estimation accuracy and the assessment of uncertainty. Twenty software professionals were randomly allocated to a Learning group or a Control group and instructed to estimate and complete the same five development tasks. Those in the Learning group, but not those in the Control group, were instructed to spend at least 30 minutes identifying, analyzing, and summarizing their effort estimation and uncertainty assessment experience after completing each task. We found that the estimation accuracy and the realism of the uncertainty assessments were no better in the Learning group than in the Control group. A follow-up study with 83 software professionals was completed to better understand this lack of improvement from lessons-learned sessions. The follow-up study found that receiving feedback about other software professionals' estimation performance led to more realistic uncertainty assessments than receiving the same feedback about one's own estimates. Lessons-learned sessions, not only in estimation contexts, have to be carefully designed to avoid wasting resources on learning processes that stimulate rather than reduce learning biases.

14.
In 2001 the ISBSG database was used by Jeffery et al. (Using public domain metrics to estimate software development effort. Proceedings Metrics’01, London, pp 16–27, 2001; S1) to compare the effort prediction accuracy between cross- and single-company effort models. Given that more than 2,000 projects were later volunteered to this database, in 2005 Mendes et al. (A replicated comparison of cross-company and within-company effort estimation models using the ISBSG Database, in Proceedings of Metrics’05, Como, 2005; S2) replicated S1 but obtained different results. The difference in results could have occurred due to legitimate differences in data set patterns; however, they could also have occurred due to differences in experimental procedure given that S2 was unable to employ exactly the same experimental procedure used in S1 because S1’s procedure was not fully documented. Recently, we applied S2’s experimental procedure to the ISBSG database version used in S1 (release 6) to assess if differences in experimental procedure would have contributed towards different results (Lokan and Mendes, Cross-company and single-company effort models using the ISBSG Database: a further replicated study, Proceedings of the ISESE’06, pp 75–84, 2006; S3). Our results corroborated those from S1, suggesting that differences in the results obtained by S2 were likely caused by legitimate differences in data set patterns. We have since been able to reconstruct the experimental procedure of S1 and therefore in this paper we present both S3 and also another study (S4), which applied the experimental procedure of S1 to the data set used in S2. By applying the experimental procedure of S2 to the data set used in S1 (study S3), and the experimental procedure of S1 to the data set used in S2 (study S4), we investigate the effect of all the variations between S1 and S2. 
Our results for S4 support those of S3, suggesting that differences in data preparation and analysis procedures did not affect the outcome of the analysis. Thus, the different results of S1 and S2 are very likely due to fundamental differences in the data sets.

Emilia Mendes is a full-time Computer Science academic at the University of Auckland (New Zealand), where she leads the WETA (Web Engineering, Technology and Applications) research group. She has active research interests in empirical Web and software engineering, in particular cost and size estimation, productivity and quality measurement and metrics, and evidence-based research. Dr. Mendes is on the programme committee of numerous international conferences and workshops, and on the editorial board of several international journals in Web and software engineering. Dr. Mendes worked in the software industry for ten years before obtaining her Ph.D. in Computer Science from the University of Southampton (UK) and moving to Auckland. More information can be obtained at http://www.cs.auckland.ac.nz/~emilia

Chris Lokan is a Senior Lecturer at the University of New South Wales (Australian Defence Force Academy campus) in Canberra. His teaching and research concentrate on software engineering and software metrics. His main research interests are software size measures, software effort and cost estimation, and software benchmarking. Recently his research has concentrated on the use of multi-company datasets for estimation, and data mining using genetic algorithms. Chris is a member of the ACM, the IEEE Computer Society, and the Australian Software Metrics Association.

15.
The ability to accurately and consistently estimate software development effort is required by project managers in planning and conducting software development activities. Since software effort drivers are vague and uncertain, software effort estimates, especially in the early stages of the development life cycle, are prone to a certain degree of estimation error. A software effort estimation model that adopts a fuzzy inference method provides a solution suited to the uncertain and vague properties of software effort drivers. The present paper proposes a fuzzy neural network (FNN) approach that embeds an artificial neural network into the fuzzy inference process in order to derive software effort estimates. The artificial neural network is utilized to determine the significant fuzzy rules in the fuzzy inference process. We demonstrate our approach using the 63 historical projects of the well-known COCOMO data set. Empirical results showed that applying the FNN to software effort estimation resulted in slightly smaller mean magnitude of relative error (MMRE) and probability of a project having a relative error of less than or equal to 0.25 (Pred(0.25)), compared with the results obtained by using an artificial neural network alone and the original model. The proposed model can also provide objective fuzzy effort estimation rule sets by adopting the learning mechanism of the artificial neural network.

16.
This paper presents an improvement to an effort estimation method that can be used to predict the level of effort for software development projects. A new estimation approach based on a two-phase algorithm is used. In the first phase, we apply a calculation based on use case points (UCPs). In the second phase, we add correction values (a1, a2) obtained via least squares regression. This approach employs historical project data to refine the estimate. By applying least squares regression, the algorithm filters out estimation errors caused by human factors and company practice.
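One reading of this two-phase scheme (phase 1: a UCP-based estimate; phase 2: a linear correction effort = a1·estimate + a2, fitted by least squares on historical estimate/actual pairs) can be sketched as follows; the data are illustrative, not the paper's:

```python
# Hedged sketch of the two-phase idea: phase 1 produces a UCP-based
# estimate; phase 2 fits correction values (a1, a2) by least squares on
# historical (estimate, actual) pairs and applies effort = a1*est + a2.
import numpy as np

hist_est = np.array([120.0, 200.0, 310.0, 450.0])  # phase-1 UCP estimates
hist_act = np.array([150.0, 230.0, 360.0, 500.0])  # actual efforts

A = np.column_stack([hist_est, np.ones_like(hist_est)])
(a1, a2), *_ = np.linalg.lstsq(A, hist_act, rcond=None)

def corrected(estimate):
    """Apply the fitted linear correction to a new phase-1 estimate."""
    return a1 * estimate + a2

print(round(corrected(260.0), 1))
```

The fitted (a1, a2) absorb any systematic over- or under-estimation in the raw UCP figures, which is how historical data refines the estimate.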

17.
Context

Along with expert judgment, analogy-based estimation, and algorithmic methods (such as function point analysis and COCOMO), Least Squares Regression (LSR) has been one of the most commonly studied software effort estimation methods. However, an effort estimation model using LSR (a single LSR model) is highly affected by the data distribution. Specifically, if the data set is scattered and the data do not sit closely on the single LSR model line (do not closely map to a linear structure), the model usually shows poor performance. To overcome this drawback of the LSR model, a data partitioning-based approach can be considered as one solution to alleviate the effect of data distribution. Even though clustering-based approaches have been introduced, they still have potential problems in providing accurate and stable effort estimates.

Objective

In this paper, we propose a new data partitioning-based approach to achieve more accurate and stable effort estimates via LSR. The approach also provides an effort prediction interval that is useful for describing the uncertainty of the estimates.

Method

Empirical experiments are performed to evaluate the performance of the proposed approach by comparing it with the basic LSR approach and clustering-based approaches, based on industrial data sets (two subsets of the ISBSG (Release 9) data set and one industrial data set collected from a banking institution).

Results

The experimental results show that the proposed approach not only improves the accuracy of effort estimation more significantly than the other approaches, but also achieves robust and stable results across degrees of data partitioning.

Conclusion

Compared with the other approaches considered, the proposed approach shows superior performance by alleviating the effect of data distribution, a major practical issue in software effort estimation.

18.
More resources are spent on maintaining software than on developing it. Maintenance costs for large-scale software systems can amount to between 40% and 67% of the total system life-cycle cost. It is therefore important to manage maintenance costs and to balance costs with benefits. Frequently this task is approached, at least in the literature, merely as a software cost estimation problem. Unfortunately, the creation of effort estimation models for maintenance, a primary requisite for cost calculation, has not yet been satisfactorily addressed. At the same time, project managers do not estimate costs first; instead, they prioritize maintenance projects, trying to determine which projects to carry out (first) within their fixed budgets and resource capabilities. This essentially means that cost estimation is done qualitatively before formal cost estimation techniques are employed. Recognizing the problems associated with standard, regression-based estimation models, and focusing on the needs of software project managers, this research studied the process of project prioritization as an expert problem-solving and decision-making task, through concurrent think-aloud protocols. Analysis of these protocols revealed that experts rarely make use of formal mathematical models such as COCOMO or FPA to determine project priorities or resource needs, although project size is a key determinant of a project's priority. Instead, estimators qualitatively consider the cost or value, urgency, and difficulty of a maintenance task, then prioritize projects accordingly, followed by a decision concerning further treatment of the problem. The process employs case-based reasoning and the use of heuristics. While different experts may use different strategies, there is great overlap in their overall prioritization procedures.

19.
This paper develops interior penalty discontinuous Galerkin (IP-DG) methods to approximate \(W^{2,p}\) strong solutions of second order linear elliptic partial differential equations (PDEs) in non-divergence form with continuous coefficients. The proposed IP-DG methods are closely related to the IP-DG methods for advection-diffusion equations, and they are easy to implement on existing standard IP-DG software platforms. It is proved that the proposed IP-DG methods have unique solutions and converge with optimal rate to the \(W^{2,p}\) strong solution in a discrete \(W^{2,p}\)-norm. The crux of the analysis is to establish a DG discrete counterpart of the Calderon–Zygmund estimate and to adapt a freezing coefficient technique used for the PDE analysis at the discrete level. To obtain such a crucial estimate, we need to establish broken \(W^{1,p}\)-norm error estimates for IP-DG approximations of constant coefficient elliptic PDEs, which is also of independent interest. Numerical experiments are provided to gauge the performance of the proposed IP-DG methods and to validate the theoretical convergence results.

20.

Context

The loose coupling of services and Service-Based Applications (SBAs) has made them the ideal platform for context-based run-time adaptation. There has been a lot of research into implementation techniques for adapting SBAs, with much less effort focused on the software process required to guide the adaptation.

Objective

This paper aims to bridge that gap by providing an empirically grounded software process model that can be used by software practitioners who want to build adaptable SBAs. The process model focuses only on the adaptation-specific issues.

Method

The process model presented in this paper is based on data collected through interviews with 10 practitioners occupying various roles within eight different companies. The data was analyzed using qualitative data analysis techniques. We used the output to develop a set of activities, tasks, stakeholders and artifacts that were used to construct the process model.

Results

The outcome of the data analysis process was a process model identifying nine sets of adaptation process attributes. These can be used in conjunction with an organisation’s existing development life-cycle or another reference life-cycle.

Conclusion

The process model developed in this paper provides a solid reference for practitioners who are planning to develop adaptable SBAs. It has advantages over similar approaches in that it focuses on the software process rather than on specific adaptation mechanism implementation techniques.
