Evaluating prediction systems in software project estimation
Affiliation: 1. Simula Metropolitan Center for Digital Engineering, P.O. Box 134, 1325 Lysaker, Oslo 0167, Norway; 2. Department of Informatics, University of Oslo, Oslo 0316, Norway
Abstract:
Context: Software engineering has a problem in that when we empirically evaluate competing prediction systems we obtain conflicting results.
Objective: To reduce the inconsistency amongst validation study results and to provide a more formal foundation for interpreting results, with a particular focus on continuous prediction systems.
Method: A new framework is proposed for evaluating competing prediction systems, based upon (1) an unbiased statistic, Standardised Accuracy (SA); (2) testing the likelihood of a result relative to the baseline technique of random 'predictions', that is, guessing; and (3) calculation of effect sizes.
Results: Previously published empirical evaluations of prediction systems are re-examined and the original conclusions are shown to be unsafe. Additionally, even the strongest results are shown to have no more than a medium effect size relative to random guessing.
Conclusions: Biased accuracy statistics such as MMRE are deprecated. By contrast, this new empirical validation framework leads to meaningful results. Such steps will assist in performing future meta-analyses and in providing more robust and usable recommendations to practitioners.
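For readers who want to experiment with the framework, the sketch below shows how its three ingredients might be computed: MAR-based Standardised Accuracy, a Monte Carlo random-guessing baseline, and an effect size relative to that baseline. This is a minimal illustration based only on the abstract; the SA formula (SA = (1 - MAR / MAR_P0) x 100), the guessing scheme (predicting case i by sampling an actual value from the other cases), and the effect-size denominator reflect one reading of the framework, the function names and data are invented for this example, and the paper should be consulted for the authoritative definitions.

    import numpy as np

    def mar(actual, predicted):
        """Mean Absolute Residual: the unbiased accuracy statistic the
        framework builds on (unlike MMRE, it does not systematically
        reward under-estimation)."""
        return np.mean(np.abs(actual - predicted))

    def mmre(actual, predicted):
        """Mean Magnitude of Relative Error: the biased statistic the
        paper deprecates; included only for comparison."""
        return np.mean(np.abs(actual - predicted) / actual)

    def standardised_accuracy(actual, predicted, runs=1000, seed=None):
        """Standardised Accuracy (SA) plus an effect size, both measured
        against random guessing, where each 'prediction' for case i is
        an actual value drawn from the other cases."""
        rng = np.random.default_rng(seed)
        actual = np.asarray(actual, dtype=float)
        predicted = np.asarray(predicted, dtype=float)
        n = len(actual)

        mar_p = mar(actual, predicted)

        # Monte Carlo estimate of MAR under random guessing (the P0 baseline).
        mar_p0_runs = np.empty(runs)
        for r in range(runs):
            guesses = np.array([rng.choice(np.delete(actual, i)) for i in range(n)])
            mar_p0_runs[r] = mar(actual, guesses)
        mar_p0 = mar_p0_runs.mean()

        # SA: percentage improvement over guessing (0 = no better than
        # guessing; 100 = perfect predictions).
        sa = (1.0 - mar_p / mar_p0) * 100.0

        # Effect size relative to guessing (positive = better than guessing).
        # Using the spread of the guessing baseline as the denominator is this
        # sketch's assumption, not a quote of the paper's exact definition.
        delta = (mar_p0 - mar_p) / mar_p0_runs.std(ddof=1)

        return sa, delta

A quick usage example with illustrative numbers (not data from the paper):

    actual    = np.array([120.0,  80.0, 310.0,  45.0, 200.0, 150.0])
    predicted = np.array([100.0,  95.0, 280.0,  60.0, 230.0, 140.0])
    sa, delta = standardised_accuracy(actual, predicted, runs=2000, seed=1)
    print(f"SA = {sa:.1f}%, effect size vs. guessing = {delta:.2f}")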
This article is indexed by ScienceDirect and other databases.