Similar Documents
13 similar documents found.
1.
The ability to accurately and consistently estimate software development effort is required by project managers in planning and conducting software development activities. Since software effort drivers are vague and uncertain, software effort estimates, especially in the early stages of the development life cycle, are prone to a certain degree of estimation error. A software effort estimation model that adopts a fuzzy inference method offers a way to accommodate the uncertain and vague properties of software effort drivers. This paper proposes a fuzzy neural network (FNN) approach that embeds an artificial neural network into the fuzzy inference process in order to derive software effort estimates; the artificial neural network is used to determine the significant fuzzy rules in the fuzzy inference process. We demonstrate the approach using the 63 historical projects from the well-known COCOMO data set. Empirical results show that applying the FNN to software effort estimation yields slightly better results, in terms of the mean magnitude of relative error (MMRE) and the probability of a project having a relative error of at most 0.25 (Pred(0.25)), than using an artificial neural network alone or the original model. The proposed model can also provide objective fuzzy effort estimation rule sets by adopting the learning mechanism of the artificial neural network.
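For reference, the two accuracy measures cited in this abstract can be stated directly in code. The following is a minimal sketch with hypothetical effort figures, not the paper's implementation:

```python
def mre(actual, estimated):
    """Magnitude of relative error for a single project."""
    return abs(actual - estimated) / actual

def mmre(actuals, estimates):
    """Mean magnitude of relative error over all projects."""
    errors = [mre(a, e) for a, e in zip(actuals, estimates)]
    return sum(errors) / len(errors)

def pred(actuals, estimates, level=0.25):
    """Fraction of projects whose MRE is at most `level`."""
    hits = sum(1 for a, e in zip(actuals, estimates) if mre(a, e) <= level)
    return hits / len(actuals)

# Hypothetical actual vs. estimated efforts (person-months).
actual = [120.0, 45.0, 300.0, 80.0]
est = [100.0, 50.0, 340.0, 95.0]
print(f"MMRE = {mmre(actual, est):.3f}, Pred(0.25) = {pred(actual, est):.2f}")
```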

2.
Simplifying effort estimation based on Use Case Points

Context

The Use Case Points (UCP) method can be used to estimate software development effort based on a use-case model and two sets of adjustment factors relating to the environmental and technical complexity of a project. The question arises whether all of these components are actually important from an effort estimation point of view.

Objective

This paper investigates the construction of UCP in order to find possible ways of simplifying it.

Method

A cross-validation procedure was used to compare the accuracy of the different variants of UCP (with and without the investigated simplifications). The analysis was based on data from a set of 14 projects whose effort ranged from 277 to 3593 man-hours. In addition, factor analysis was performed to investigate the possibility of reducing the number of adjustment factors.

Results

The two variants of UCP, with and without unadjusted actor weights (UAW), provided similar prediction accuracy. In addition, only a minor influence of the adjustment factors on the accuracy of UCP was observed. The results of the factor analysis indicated that the number of adjustment factors could be reduced from 21 to 6 (2 environmental factors and 4 technical complexity factors). It was also observed that the variants of UCP calculated based on steps were slightly more accurate than the variants calculated based on transactions. Finally, a recently proposed use-case-based size metric, TTPoints, provided better accuracy than any of the investigated variants of UCP.

Conclusion

The observation in this study was that the UCP method could be simplified by rejecting UAW, calculating UCP based on steps instead of transactions, or just counting the total number of steps in use cases. Moreover, two recently proposed use-case-based size metrics, Transactions and TTPoints, could be used as alternatives to UCP for estimating effort in the early stages of software development.
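For orientation, the sketch below shows the standard (Karner) UCP calculation that these variants modify, with the usual 13 technical and 8 environmental adjustment factors (the 21 factors discussed above). The counts and factor ratings are hypothetical; the paper's simplified variants would drop or alter parts of this formula.

```python
# Standard weights for the 13 technical and 8 environmental factors.
TECH_WEIGHTS = [2.0, 1.0, 1.0, 1.0, 1.0, 0.5, 0.5, 2.0, 1.0, 1.0, 1.0, 1.0, 1.0]
ENV_WEIGHTS = [1.5, 0.5, 1.0, 0.5, 1.0, 2.0, -1.0, -1.0]

def ucp(uaw, uucw, tech_ratings, env_ratings):
    """Use Case Points: (UAW + UUCW) adjusted by TCF and EF.

    Ratings are 0-5 assessments of each adjustment factor.
    """
    tcf = 0.6 + 0.01 * sum(w * r for w, r in zip(TECH_WEIGHTS, tech_ratings))
    ef = 1.4 - 0.03 * sum(w * r for w, r in zip(ENV_WEIGHTS, env_ratings))
    return (uaw + uucw) * tcf * ef

# Hypothetical project: actor weight 12, use-case weight 180, mid ratings.
print(ucp(12, 180, [3] * 13, [3] * 8))
```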

3.
Analogy-based software effort estimation using Fuzzy numbers

Background

Early-stage software effort estimation is a crucial task for project bidding and feasibility studies. Since data collected during the early stages of a software development lifecycle are always imprecise and uncertain, it is very hard to deliver accurate estimates. Analogy-based estimation, one of the popular estimation methods, is rarely used during the early stage of a project because of the uncertainty associated with attribute measurement and data availability.

Aims

We have integrated analogy-based estimation with fuzzy numbers in order to improve the performance of software project effort estimation during the early stages of a software development lifecycle, using all available early data. In particular, this paper proposes a new software project similarity measure and a new adaptation technique based on fuzzy numbers.

Method

Empirical evaluations with a jackknifing procedure were carried out using five benchmark data sets of software projects, namely ISBSG, Desharnais, Kemerer, Albrecht and COCOMO, and the results are reported. The results are compared with those obtained by case-based reasoning and stepwise regression methods employed in the literature.

Results

In all data sets, the empirical evaluations showed that the proposed similarity measure and adaptation technique were able to significantly improve the performance of analogy-based estimation during the early stages of software development. The results also showed that the proposed method outperforms well-known estimation techniques such as case-based reasoning and stepwise regression.

Conclusions

It is concluded that the proposed estimation model could form a useful approach for early-stage estimation, especially when the data are highly uncertain.
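To illustrate the general flavor of comparing projects through fuzzy numbers, the sketch below computes a simple distance-based similarity between triangular fuzzy attribute values. It is an assumed, generic measure for illustration only; the paper proposes its own similarity measure and adaptation technique.

```python
from dataclasses import dataclass

@dataclass
class TFN:
    """Triangular fuzzy number (lower, modal, upper)."""
    lo: float
    mid: float
    hi: float

def similarity(a: TFN, b: TFN) -> float:
    """Similarity in [0, 1]: 1 minus the normalized vertex distance."""
    dist = (abs(a.lo - b.lo) + abs(a.mid - b.mid) + abs(a.hi - b.hi)) / 3.0
    spread = max(a.hi, b.hi) - min(a.lo, b.lo)
    return 1.0 - dist / spread if spread else 1.0

# Hypothetical "team experience" ratings of a new vs. a historical project.
new_proj = TFN(2.0, 3.0, 4.0)
past_proj = TFN(3.0, 4.0, 5.0)
print(f"similarity = {similarity(new_proj, past_proj):.2f}")
```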

4.
5.
Use cases constitute a popular technique for problem analysis, partly due to their focus on thinking in terms of user needs. However, this is no guarantee of discovering all the subproblems that compose the structure of a given software problem. Moreover, a rigorous application of the technique requires prior consensus about the meaning of I. Jacobson's statement that “a use case must give a measurable value to a particular actor” (The Rational Edge, March 2003). This paper proposes a particular characterisation of the concept of “value” for the purpose of problem structuring. To this end, we build on the catalogue of frames for real software problems proposed by M. Jackson (Problem Frames, 2001) and reason about what could be valuable for the user in each problem class. We illustrate our technique with the analysis of a web auction problem.

6.
Accurate estimation of software development effort is strongly associated with the success or failure of software projects. The clear lack of convincing accuracy and flexibility in this area has attracted the attention of researchers over the past few years. Despite improvements in effort estimation, there is no strong agreement as to which individual model is best. Recent studies have found that accurate estimation of development effort is unattainable globally, meaning that a single high-performance estimation model suitable for all types of software projects is likely impossible. In this paper, a localized multi-estimator model, called LMES, is proposed in which software projects are classified based on underlying attributes. Different clusters of projects are then investigated locally so that the most accurate estimators are selected for each cluster. Unlike prior models, LMES does not rely on a single estimator within a cluster of projects; rather, an exhaustive investigation is conducted to find the best combination of estimators to assign to each cluster. The investigation domain includes 10 estimators combined using four combination methods, resulting in 4017 different combinations. The ISBSG, Maxwell and COCOMO datasets, comprising a total of 573 real software projects, are used for evaluation. The promising results show that estimation accuracy is improved through localization of the estimation process and allocation of appropriate estimators. Besides increased accuracy, a significant contribution of LMES is its adaptability and flexibility in dealing with the complexity and uncertainty inherent in software development effort estimation.

7.
CRISP-DM is the standard process model for developing data mining projects. It defines the processes and tasks that must be carried out to develop a data mining project; one of the tasks it proposes is estimating the cost of the data mining project.

8.
Context

Along with expert judgment, analogy-based estimation, and algorithmic methods (such as function point analysis and COCOMO), Least Squares Regression (LSR) has been one of the most commonly studied software effort estimation methods. However, an effort estimation model using a single LSR model is highly affected by the data distribution. Specifically, if the data set is scattered and the data do not sit closely on the single LSR model line (that is, do not map closely to a linear structure), the model usually performs poorly. To overcome this drawback of the LSR model, a data partitioning-based approach can be considered as one solution to alleviate the effect of data distribution. Even though clustering-based approaches have been introduced, they still have difficulty providing accurate and stable effort estimates.

Objective

In this paper, we propose a new data partitioning-based approach to achieve more accurate and stable effort estimates via LSR. This approach also provides an effort prediction interval that is useful for describing the uncertainty of the estimates.

Method

Empirical experiments were performed to evaluate the performance of the proposed approach by comparing it with the basic LSR approach and clustering-based approaches, based on industrial data sets (two subsets of the ISBSG (Release 9) data set and one industrial data set collected from a banking institution).

Results

The experimental results show that the proposed approach not only improves the accuracy of effort estimation more significantly than the other approaches, but also achieves robust and stable results across degrees of data partitioning.

Conclusion

Compared with the other approaches considered, the proposed approach shows superior performance by alleviating the effect of data distribution, a major practical issue in software effort estimation.
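To make the partitioning idea concrete, the sketch below fits an ordinary least-squares line separately on two size-based partitions of hypothetical project data and estimates a new project with its partition's local model. It illustrates the general motivation only, not the paper's specific partitioning scheme or its prediction intervals.

```python
def fit_lsr(points):
    """Closed-form simple linear regression: effort = b0 + b1 * size."""
    n = len(points)
    sx = sum(x for x, _ in points); sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points); sxy = sum(x * y for x, y in points)
    b1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b0 = (sy - b1 * sx) / n
    return b0, b1

# Hypothetical (size in function points, effort in person-hours) pairs.
projects = [(50, 400), (60, 430), (80, 600), (300, 2600), (350, 3100), (400, 3700)]

# Partition on size (a single fixed cut here, purely for illustration).
small = [(x, y) for x, y in projects if x < 200]
large = [(x, y) for x, y in projects if x >= 200]
models = {"small": fit_lsr(small), "large": fit_lsr(large)}

def estimate(size):
    """Estimate effort with the local model of the matching partition."""
    b0, b1 = models["small"] if size < 200 else models["large"]
    return b0 + b1 * size

print(f"estimated effort for 70 FP: {estimate(70):.0f} person-hours")
```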

9.
Context

Most research in software effort estimation has not considered chronology when selecting projects for training and testing sets. A chronological split uses a project's starting and completion dates, such that any model that estimates effort for a new project p only uses as training data projects that were completed prior to p's start. Four recent studies investigated the use of chronological splits, using moving windows wherein only the most recent projects completed prior to a project's starting date were used as training data. The first three studies (S1-S3) found some evidence in favor of using windows; they all defined window sizes as fixed numbers of recent projects. In practice, we suggest that estimators think in terms of elapsed time rather than the size of the data set when deciding which projects to include in a training set. In the fourth study (S4) we showed that the use of windows based on duration can also improve estimation accuracy.

Objective

This paper's contribution is to extend S4 using an additional data set, and to investigate the effect on accuracy of using moving windows of various durations.

Method

Stepwise multivariate regression was used to build prediction models, using all available training data and also using windows of various durations to select the training data. Accuracy was compared based on absolute residuals and MREs; the Wilcoxon test was used to check the statistical significance of differences between results. Accuracy was also compared against estimates derived from windows containing fixed numbers of projects.

Results

Neither fixed-size nor fixed-duration windows provided superior estimation accuracy in the new data set.

Conclusions

Contrary to intuition, our results suggest that it is not always beneficial to exclude old data when estimating effort for new projects. When windows are helpful, windows based on duration are effective.
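A minimal sketch of the duration-based selection described above: to estimate project p, use only projects completed before p starts and no more than a fixed number of days before it. The dates and the window size are hypothetical.

```python
from datetime import date, timedelta

def training_set(projects, p_start, window_days):
    """Projects completed before p_start, within the moving window."""
    cutoff = p_start - timedelta(days=window_days)
    return [prj for prj in projects
            if cutoff <= prj["completed"] < p_start]

history = [
    {"name": "A", "completed": date(2019, 3, 1)},
    {"name": "B", "completed": date(2021, 6, 15)},
    {"name": "C", "completed": date(2022, 1, 10)},
]

# Two-year window relative to a project starting 2022-06-01.
window = training_set(history, date(2022, 6, 1), window_days=730)
print([prj["name"] for prj in window])  # ['B', 'C']
```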

10.
Eliciting security requirements with misuse cases

11.
This research involves a methodology and an associated proof-of-concept tool to partially automate software validation by comparing UML use cases with particular execution scenarios in source code. These execution scenarios are represented as the internal documentation (identifier names and comments) associated with sequences of execution in static call graphs. The methodology has the potential to reduce validation time and associated costs in many organizations by enabling quick and easy validation of software relative to the use cases that describe the requirements. The proof-of-concept tool, as it currently stands, is intended as an aid to an IV&V software engineer, to assist in directing the software validation process. The approach is lightweight and easily implemented.

12.
In biomedical studies, researchers are often interested in assessing the association between one or more ordinal explanatory variables and an outcome variable, while at the same time adjusting for covariates of any type. The outcome variable may be continuous, binary, or represent censored survival times. In the absence of precise knowledge of the response function, using monotonicity constraints on the ordinal variables improves efficiency in estimating parameters, especially when sample sizes are small. An active set algorithm that can efficiently compute such estimators is proposed, and a characterization of the solution is provided. Having an efficient algorithm at hand is especially relevant when applying likelihood ratio tests in restricted generalized linear models, where one needs the value of the likelihood at the restricted maximizer. The algorithm is illustrated on a real-life data set from oncology.
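As a small illustration of monotonicity-constrained estimation (not the paper's active set algorithm, which targets the more general GLM setting), the sketch below implements the classic pool-adjacent-violators algorithm for least-squares isotonic regression on toy data:

```python
def pava(y, w=None):
    """Nondecreasing fit minimizing the weighted squared error to y."""
    w = w or [1.0] * len(y)
    # Each block: [mean value, total weight, number of points pooled].
    blocks = []
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # Pool while the monotonicity constraint is violated.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            v2, w2, n2 = blocks.pop()
            v1, w1, n1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(v1 * w1 + v2 * w2) / wt, wt, n1 + n2])
    fit = []
    for value, _, count in blocks:
        fit.extend([value] * count)
    return fit

print(pava([1.0, 3.0, 2.0, 4.0, 3.5]))  # [1.0, 2.5, 2.5, 3.75, 3.75]
```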

13.
Reading methods for software inspections are used to help reviewers focus on particular aspects of a software artefact. Many experiments on checklist-based reading and scenario-based reading have concluded that such focus is important for software reviewers. This paper describes and evaluates a reading technique called usage-based reading (UBR). UBR utilises prioritised use cases to guide reviewers through an inspection; more importantly, it drives reviewers to focus on the software parts that are most important to a user. An experiment was conducted with 27 third-year Bachelor's software engineering students, where one group used use cases sorted in prioritised order and the control group used randomly ordered use cases. The main result is that reviewers in the group with prioritised use cases are significantly more efficient and effective at detecting the most critical faults from a user's point of view. Consequently, UBR has the potential to become an important reading technique. Future extensions to the reading technique are suggested, and experiences gained from the experiment are provided to support replications.
