Similar Literature
20 similar documents found (search time: 187 ms)
1.
Evaluating software cost estimation methods sensibly, and thereby selecting the most suitable one from the many methods currently available, is of great importance. Building on an evaluation framework for software cost estimation methods, this paper applies a comprehensive evaluation approach that combines fuzzy evaluation with grey theory to assess each estimation method. A worked example shows that the approach can evaluate software cost estimation methods reasonably and effectively, offering a useful reference for software development practice.

2.
To address the inaccuracy of early-stage software project cost estimates, this paper builds a software project case base and proposes a CBR-based software project cost estimation method (CBRCEM). Following the cost-driver theory of the COCOMO model, it identifies the attributes that affect project cost; it introduces a normalized utility function and applies the Analytic Hierarchy Process (AHP) to compute the weights of those cost-related attributes; after comparing common case-retrieval algorithms and considering the characteristics of software cost estimation, it establishes a project similarity measure based on an improved grey relational analysis; finally, it estimates project cost using PERT theory to make the result more accurate. CBRCEM was implemented in Java on Windows and applied in practice; a case study shows that the method yields more accurate early-stage cost estimates.
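The PERT step at the end of this pipeline is the standard three-point estimate; a minimal sketch, where the cost figures are hypothetical rather than taken from the paper:

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Classic PERT three-point estimate: beta-weighted mean and standard deviation."""
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

# Hypothetical cost figures (person-months) for one retrieved similar project
expected, sd = pert_estimate(10.0, 14.0, 24.0)
```

The expected value weights the most likely outcome four times as heavily as either extreme, which is why PERT is popular for early-stage estimates where only rough bounds are known.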

3.
Software cost estimation is a critical activity in software development, yet existing methods still fall short of accurate estimates. To improve accuracy, this paper proposes a software cost estimation model based on an RBF neural network. The model determines the number of hidden-layer nodes by clustering the training samples, optimizes the hidden-node centers and the widths of the Gaussian functions with a genetic algorithm, and trains the output weights with linear least squares. Experiments show that the model estimates software cost accurately and effectively.

4.
赵小敏, 曹光斌, 费梦钰, 朱李楠. Computer Science (《计算机科学》), 2018, 45(Z11): 501-504, 531
Software cost estimation is one of the most important issues in development scheduling, management decision-making, and software quality. Because software development cost is widely regarded in industry as inaccurate and hard to estimate, this paper proposes a weighted analogy-based estimation method: the similarity between projects is defined as a Mahalanobis distance, which accounts for correlations among features; the feature weights are obtained with an improved particle swarm optimization algorithm; and cost is then estimated by analogy. Experiments show that the method is more accurate than non-weighted analogy, neural networks, and other non-algorithmic models. A real-world case study shows that, in the early phase of development, cost estimates based on requirements analysis with this method are more accurate than expert estimates.
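The Mahalanobis-distance similarity the method relies on can be sketched for two features; the covariance matrix and project features below are illustrative assumptions, and the PSO weighting step is omitted:

```python
def mahalanobis_2d(x, y, cov):
    """Mahalanobis distance between two 2-feature projects, given a 2x2
    feature covariance matrix (accounts for correlation between features)."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]  # 2x2 matrix inverse
    dx = [x[0] - y[0], x[1] - y[1]]
    # dx^T * inv(cov) * dx
    t0 = inv[0][0] * dx[0] + inv[0][1] * dx[1]
    t1 = inv[1][0] * dx[0] + inv[1][1] * dx[1]
    return (dx[0] * t0 + dx[1] * t1) ** 0.5

# Hypothetical projects described by (size in KLOC, team experience score)
cov = [[4.0, 1.2], [1.2, 1.0]]  # assumed covariance of the two features
d = mahalanobis_2d([30.0, 3.0], [26.0, 2.5], cov)
```

With the identity covariance the measure reduces to plain Euclidean distance; the off-diagonal terms are what let it discount correlated features.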

5.
Since the term software engineering was coined in 1968, the field has developed for nearly fifty years, and the scale and complexity of software systems have grown steadily. Yet since roughly the 1970s the field has suffered a software crisis, typified by widespread schedule delays, budget overruns, and quality defects. This underscores the importance of cost estimation in the development process: accurate estimates are a precondition for completing projects on time. This paper adopts a similarity measure based on the Pearson correlation coefficient and combines it with the TOPSIS method to estimate software cost by analogy, using the most similar historical project as the reference. The method is evaluated on the Desharnais dataset and compared with other approaches; the results show that the correlation-based measure is more accurate than existing similarity measures.
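A minimal sketch of correlation-based analogy, with the TOPSIS ranking stage omitted and all feature vectors and efforts hypothetical:

```python
from math import sqrt

def pearson(u, v):
    """Pearson correlation between two equal-length feature vectors."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sqrt(sum((a - mu) ** 2 for a in u))
    sv = sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

def analogy_estimate(new_project, history):
    """Return the effort of the historical project most correlated with the
    new project's features (a simplification: the paper further ranks
    candidates with TOPSIS)."""
    best = max(history, key=lambda p: pearson(new_project, p["features"]))
    return best["effort"]

# Hypothetical feature vectors (e.g. size, complexity, team factors)
history = [
    {"features": [1.0, 2.0, 3.0], "effort": 120},
    {"features": [3.0, 1.0, 2.0], "effort": 250},
]
estimate = analogy_estimate([2.0, 4.0, 6.0], history)  # picks the first: 120
```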

6.
A metric-tool-based approach to applying software cost estimation models (cited by 2: 0 self-citations, 2 by others)
Subjective inputs, and too many of them, are key factors limiting the practical effectiveness of software cost estimation models. To address this, a metric-tool-based approach to applying such models is proposed. By introducing instrumental variables from statistical theory, the approach automatically converts metric data collected by measurement tools into model inputs. On the one hand, this avoids the subjectivity and inconsistency of inputs during model calibration and estimation, improving the accuracy and reliability of the results; on the other, it reduces the estimators' manual work, raising efficiency and making cost estimation models more usable. A concrete example illustrates the feasibility and effectiveness of the proposed approach.

7.
With the development of object orientation and software reuse, a new component-based software development approach has emerged, is widely favored by developers, and is evolving rapidly. To address cost estimation for component-based development, this paper combines a component-reuse cost formula with an analysis of the characteristics of component-based development and proposes a component-oriented software cost estimation method. Using the idea of decomposition, and taking the cost of traditional development as the baseline, it divides the component-based development process into several phases, estimates the cost of each phase separately, and combines them into the final estimate, providing a practical approach to component-based development cost estimation. The paper closes with an evaluation of the method and directions for future research.

8.
Web applications built on the J2EE architecture are increasingly widely used in China. Aiming at estimating the cost of such software, and after an in-depth study of existing software cost estimation methods, this paper proposes an estimation method for J2EE-based Web software, named J2EECost.

9.
As an important measure for controlling and managing software cost, development cost estimation has become a significant topic in software engineering. Reuse-based development is now the mainstream of software engineering, yet few estimation models take software reuse into account. This paper proposes a reuse-aware cost estimation model, applies it to adapt the COCOMO model for reuse, and validates the result with an example. It also gives a strategy for correcting model parameters using stored-procedure techniques, providing a basis for cost estimation across various kinds of reuse-based software development.
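For context, basic COCOMO's nominal effort equation can be sketched with a crude reuse discount; the flat discount factor is purely an assumption of this sketch, not the paper's adaptation of COCOMO:

```python
def cocomo_effort(kloc, a=2.4, b=1.05):
    """Basic COCOMO nominal effort (person-months) for `kloc` thousand
    lines of new code; a and b are the organic-mode constants."""
    return a * kloc ** b

def effort_with_reuse(new_kloc, reused_kloc, reuse_factor=0.3):
    """Fold reused code in as 'equivalent new' lines at a discounted rate.
    The flat reuse_factor is an illustrative assumption only."""
    return cocomo_effort(new_kloc + reuse_factor * reused_kloc)
```

Any reuse-aware model must land between the two extremes: cheaper than writing the reused code from scratch, but not free, since adaptation and integration still cost effort.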

10.
Because review time is limited at the project initiation stage, a fast approximate estimation method is needed to size the software and set the budget. This paper proposes a rapid estimation method based on expert judgment and random sampling: the software is decomposed into modules; the development organization pre-estimates the size and cost of each module and submits its budget request; the estimators then measure a random sample of software components in detail, guided by expert judgment, and use fast approximation to estimate the overall size of the project and hence its overall cost. Practical application shows that the method is feasible.

11.
In recent years, grey relational analysis (GRA), a similarity-based method, has been proposed and used in many applications. However, most traditional GRA methods consider only nonweighted similarity when predicting software development effort. In fact, nonweighted similarity may bias predictions, because each feature of a project may have a different degree of relevance to the development effort. This paper therefore proposes six weighting schemes, including nonweighted, distance-based, correlative, linear, nonlinear, and maximal weights, to be integrated into GRA for software effort estimation. Numerical examples and sensitivity analyses based on four public datasets show the performance of the proposed methods. The experimental results indicate that weighted GRA improves estimation accuracy and reliability over nonweighted GRA; they also demonstrate that weighted GRA performs better than other estimation techniques and published results. In summary, weighted GRA can be a viable alternative for predicting software development effort.
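A hedged sketch of the weighted grey relational grade described above, assuming features already normalized to [0, 1] and using the conventional distinguishing coefficient rho = 0.5; the six weighting schemes themselves are not reproduced, and the weights shown are illustrative:

```python
def weighted_gra(reference, candidates, weights, rho=0.5):
    """Weighted grey relational grade between a reference sequence (the
    project to estimate) and each candidate project, with per-feature
    weights. rho is the usual distinguishing coefficient."""
    deltas = [[abs(r - c) for r, c in zip(reference, cand)]
              for cand in candidates]
    flat = [d for row in deltas for d in row]
    d_min, d_max = min(flat), max(flat)
    if d_max == 0:  # all candidates identical to the reference
        return [sum(weights)] * len(candidates)
    grades = []
    for row in deltas:
        coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in row]
        grades.append(sum(w * g for w, g in zip(weights, coeffs)))
    return grades

# Hypothetical normalized project features; weights favor the first feature
grades = weighted_gra([0.2, 0.5],
                      [[0.25, 0.55], [0.9, 0.1]],
                      [0.7, 0.3])
```

The closer candidate gets the higher grade; replacing the fixed weights with distance-based or correlative ones is what the paper studies.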

12.
This paper proposes a new evolutionary learning method, free of algorithm-specific parameters, for solving optimization problems. The method is inspired by the information set concept, which represents the uncertainty in an effort using an entropy function. The method, termed Human Effort For Achieving Goals (HEFAG), comprises two phases: an emulation phase and a boosting phase. In the emulation phase each contender emulates the outcome of the best achiever, and the efforts associated with the average and best outcomes are converted into information values based on the information set. In the boosting phase the efforts of all contenders are boosted by adding the differential information values of two randomly chosen contenders. The proposed method is tested on standard benchmark functions and found to outperform several well-known evolutionary methods, based on statistical analysis of the experimental results with the Kruskal-Wallis test and the Wilcoxon rank-sum test.

13.
Noise estimation and detection algorithms must adapt to a changing environment quickly, so they use a least mean square (LMS) filter. However, there is a downside: the convergence speed of an LMS filter is very low, which in turn lowers speech recognition rates. To overcome this weakness, we propose a method for establishing a speech recognition clustering model that is robust in noisy environments. Because the proposed method cancels noise with an average estimator least mean square (AELMS) filter, a robust clustering model can be established. The AELMS filter, which preserves the source features of speech and reduces the degradation of speech information, cancels the noise in a contaminated speech signal, and a Gaussian state model is then clustered to increase robustness to noise. Recognition performance was evaluated by composing this Gaussian clustering model, a robust speech recognition clustering model, in a noisy environment. The study shows that the signal-to-noise ratio of speech, improved by canceling the continually changing environmental noise, was enhanced by 2.8 dB on average, and the recognition rate improved by 4.1%.
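The plain LMS update whose slow convergence motivates the AELMS filter can be sketched as a system-identification loop; this shows only textbook LMS on made-up data, not the paper's averaged estimator:

```python
import random

def lms_identify(taps, mu=0.05, steps=3000, seed=7):
    """Identify an unknown FIR system with the LMS update w += mu * e * x.
    The slow convergence of this plain update is the weakness an averaged
    estimator is meant to mitigate."""
    random.seed(seed)
    n = len(taps)
    w = [0.0] * n
    buf = [0.0] * n
    for _ in range(steps):
        buf = [random.uniform(-1, 1)] + buf[:-1]   # shift in a new sample
        d = sum(t * x for t, x in zip(taps, buf))  # desired (system) output
        y = sum(c * x for c, x in zip(w, buf))     # adaptive filter output
        e = d - y                                  # instantaneous error
        w = [c + mu * e * x for c, x in zip(w, buf)]
    return w

w = lms_identify([0.5, -0.3])  # weights converge toward the true taps
```

The step size mu trades convergence speed against steady-state error, which is exactly the tension the abstract describes.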

14.
Model-based performance evaluation methods for software architectures can help architects to assess design alternatives and save costs for late life-cycle performance fixes. A recent trend is component-based performance modelling, which aims at creating reusable performance models; a number of such methods have been proposed during the last decade. Their accuracy and the effort needed for modelling are heavily influenced by human factors, which are so far hardly understood empirically. Do component-based methods make it possible to achieve comparable prediction accuracy while saving effort in a reuse scenario? We examined three monolithic methods (SPE, umlPSI, Capacity Planning (CP)) and one component-based performance evaluation method (PCM) with regard to their accuracy and effort from the viewpoint of method users. We conducted a series of three experiments (with different levels of control) involving 47 computer science students. In the first experiment, we compared the applicability of the monolithic methods in order to choose one of them for comparison. In the second experiment, we compared the accuracy and effort of this monolithic method and the component-based method for the model creation case. In the third, we studied the effort reduction from reusing component-based models. Data were collected based on the resulting artefacts, questionnaires and screen recording. They were analysed using hypothesis testing, linear models, and analysis of variance. For the monolithic methods, we found that using SPE and CP resulted in accurate predictions, while umlPSI produced over-estimates. Comparing the component-based method PCM with SPE, we found that creating reusable models using PCM takes more (but not drastically more) time than using SPE and that participants can create accurate models with both techniques.
Finally, we found that reusing PCM models can save time, because effort to reuse can be explained by a model that is independent of the inner complexity of a component. The tasks performed in our experiments reflect only a subset of the actual activities when applying model-based performance evaluation methods in a software development process. Our results indicate that sufficient prediction accuracy can be achieved with both monolithic and component-based methods, and that the higher effort for component-based performance modelling will indeed pay off when the component models incorporate and hide a sufficient amount of complexity.  相似文献   

15.
To segment multispectral cell images quickly and accurately, this paper first examines the use of spectral ratios in segmenting multispectral cell microscopy images, and then proposes a new automatic cell segmentation method that exploits the spectral information of multispectral images in combination with traditional segmentation techniques. The method selects two band images from the background-subtracted multispectral stack and takes their ratio to generate a ratio image, which is then processed with automatic multi-threshold segmentation and binary morphological operations, finally yielding cytoplasm and nucleus masks. This is the first application of spectral ratioing to multispectral cell microscopy segmentation. Experiments on automatic segmentation of bone-marrow cell images show that the method is accurate, fast, and robust to external interference, and it can be extended to the segmentation of other multispectral microscopy images.

16.
In quantitative gas analysis, the parameters of a support vector machine model are hard to determine and existing methods take a long time. To address this, an improved grid search method is proposed for building an infrared-spectroscopy-based quantitative analysis model of CO2 gas. The initial data for CO2 in automobile exhaust are optimized, and the optimized kernel parameters are then fed to the support vector machine for concentration regression. Twenty CO2 samples with concentrations ranging from 0.025% to 20% were analyzed quantitatively and compared against the PSO algorithm. Experiments show that the improved grid search obtains best parameters c = 0.25 and g = 2.8284, versus c = 18.3021 and g = 0.01 for PSO, in roughly one fifth of the time. Prediction errors are within 5%, meeting the relevant national exhaust-emission standards.
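The coarse-to-fine refinement behind an improved grid search can be sketched generically; the toy objective below merely stands in for the SVM cross-validation error, and the parameter ranges are illustrative, not the paper's:

```python
def grid_search(score, c_grid, g_grid):
    """Exhaustively score every (c, g) pair and keep the best (lowest error)."""
    return min(((c, g) for c in c_grid for g in g_grid),
               key=lambda p: score(*p))

def refined_grid_search(score, c_range, g_range, coarse=5, fine=9):
    """Two-stage search: a coarse pass over the full range, then a fine pass
    around the coarse winner. Only the generic coarse-to-fine idea is shown;
    bounds clamping and the paper's specific refinements are omitted."""
    def linspace(lo, hi, n):
        step = (hi - lo) / (n - 1)
        return [lo + i * step for i in range(n)]
    c0, g0 = grid_search(score,
                         linspace(*c_range, coarse),
                         linspace(*g_range, coarse))
    dc = (c_range[1] - c_range[0]) / (coarse - 1)
    dg = (g_range[1] - g_range[0]) / (coarse - 1)
    return grid_search(score,
                       linspace(c0 - dc, c0 + dc, fine),
                       linspace(g0 - dg, g0 + dg, fine))

# Toy error surface standing in for SVM cross-validation, minimized at (0.25, 2.8)
best_c, best_g = refined_grid_search(
    lambda c, g: (c - 0.25) ** 2 + (g - 2.8) ** 2, (0.0, 20.0), (0.0, 10.0))
```

Two passes of n points each cost 2n^2 evaluations instead of the n^4 a single grid of equivalent resolution would need, which is the source of the reported speedup over exhaustive or population-based search.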

17.
王勇, 李逸, 王丽丽, 朱晓燕. Computer Science (《计算机科学》), 2018, 45(Z11): 480-487
Accurately predicting software cost is one of the most challenging tasks in software engineering. Because of the inherent uncertainty and risk of software development, predicting only the total cost at the start of a project is not enough: the cost of each stage must be predicted continuously during development, and resources reallocated according to the trend, to ensure the project finishes within the stipulated time and budget. This paper therefore proposes AGSE (Analogy & Grey Model Based Software Stage Effort Estimation), a stage-level cost prediction method based on analogy and grey models. This hybrid method merges the predictions of the two underlying methods into a final estimate, avoiding the limitations of using either one alone. Experiments on real software project datasets show that AGSE predicts more accurately than the analogy method, the GM(1,1) model, the GV method, Kalman filtering, and linear regression, showing considerable promise.

18.
In the area of software cost estimation, various methods have been proposed to predict the effort or the productivity of a software project. Although most of the proposed methods produce point estimates, in practice it is more realistic and useful for a method to provide interval predictions. In this paper, we explore the possibility of using such a method, known as ordinal regression to model the probability of correctly classifying a new project to a cost category. The proposed method is applied to three data sets and is validated with respect to its fitting and predictive accuracy.  相似文献   

19.
In the literature, a number of different methods have been proposed to improve the prediction accuracy of grey models. However, most of them are computationally expensive, which may prohibit their extensive use. This paper describes a much simpler scheme, based on the principle of concatenation, in which unit-step predictions are concatenated by replacing the missing outputs with their previously predicted values. Despite its extreme simplicity, the predictions thus derived are shown to perform better than the methods proposed in the literature. Simulation studies show the effectiveness of the proposed algorithm when applied to nonlinear function prediction.
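The concatenation principle (append each unit-step prediction to the series and reuse it as input for the next step) can be sketched with a plain GM(1,1) model; this is a generic textbook sketch on made-up data, not the paper's exact scheme:

```python
from math import exp

def gm11_fit(x0):
    """Fit a GM(1,1) grey model to a positive series; return (a, b) from the
    whitening equation x0(k) = -a * z(k) + b, solved by least squares."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]            # accumulated series
    z = [(x1[i - 1] + x1[i]) / 2 for i in range(1, n)]  # mean-generated series
    m, y = n - 1, x0[1:]
    sz, sy = sum(z), sum(y)
    szz = sum(v * v for v in z)
    szy = sum(v * w for v, w in zip(z, y))
    neg_a = (m * szy - sz * sy) / (m * szz - sz * sz)   # regression slope = -a
    b = (sy - neg_a * sz) / m
    return -neg_a, b

def gm11_next(x0):
    """One-step-ahead forecast from a freshly fitted GM(1,1)."""
    a, b = gm11_fit(x0)
    n = len(x0)
    def x1_hat(k):
        return (x0[0] - b / a) * exp(-a * k) + b / a
    return x1_hat(n) - x1_hat(n - 1)

def gm11_concat_forecast(x0, steps):
    """Multi-step forecast by concatenation: each unit-step prediction is
    appended to the series and used as an input for the next step."""
    series, out = list(x0), []
    for _ in range(steps):
        nxt = gm11_next(series)
        series.append(nxt)
        out.append(nxt)
    return out

# Illustrative slowly growing series (roughly 5% growth per period)
preds = gm11_concat_forecast([100.0, 105.0, 110.25, 115.76], 3)
```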

20.
李晓华, 邓伟. Computer Engineering (《计算机工程》), 2012, 38(22): 263-266
Existing data integration methods for gene regulatory network reconstruction do not exploit the correlations among data sources well. This paper therefore proposes an improved data integration method: predictions are made separately from knockout data and from perturbation data; confidence levels are assigned according to the degree of overlap between the two sets of predictions, with highly overlapping parts given priority; and the predictions are then ranked by confidence. A performance comparison with the original method on the DREAM3 datasets shows that the improved method's overall performance is 4.9% higher.

