Similar Documents
20 similar documents found.
1.
Context: Functional size measurement methods are widely used but have two major shortcomings: they require complete and detailed knowledge of user requirements, and they involve relatively expensive and lengthy processes.
Objective: UML is routinely used in the software industry to describe software requirements incrementally, so UML models grow in detail and completeness throughout the requirements analysis phase. Here, we aim to define the characteristics of increasingly refined UML requirements models that support increasingly sophisticated, and hence presumably more accurate, size estimation processes.
Method: We consider the COSMIC method and three alternative processes (two of which are proposed in this paper) for estimating COSMIC size measures that can be applied to UML diagrams at progressive stages of the requirements definition phase. We then check the accuracy of the estimates by comparing the results obtained on a set of projects against the functional size values obtained with the standard COSMIC method.
Results: Our analysis shows that it is possible to write increasingly detailed and complete UML models of user requirements that provide the data required by COSMIC size estimation methods, which in turn yield increasingly accurate size estimates of the modeled software. Initial estimates are based on simple models and are obtained quickly and with little effort. The estimates grow more accurate as models grow in completeness and detail, i.e., as the requirements definition phase progresses.
Conclusion: Developers who use UML for requirements modeling can obtain an early estimate of application size at the beginning of the development process, when only a very simple UML model has been built, and can obtain increasingly accurate size estimates as knowledge of the product increases and the UML models are refined accordingly.
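At the core of the COSMIC method referenced above is a simple counting rule: each data movement (Entry, Exit, Read, Write) of a functional process contributes one COSMIC Function Point. The sketch below illustrates that rule over a hypothetical, simplified requirements model; the data structures are assumptions for illustration, not the paper's tooling.

```python
# Minimal sketch of COSMIC-style sizing over a simplified requirements model.
# The model below is a hypothetical illustration; real COSMIC measurement
# follows the ISO/IEC 19761 rules applied to the UML requirements models.
from dataclasses import dataclass

@dataclass
class FunctionalProcess:
    name: str
    entries: int = 0  # Entry data movements
    exits: int = 0    # Exit data movements
    reads: int = 0    # Read data movements
    writes: int = 0   # Write data movements

    @property
    def cfp(self) -> int:
        # In COSMIC, each data movement contributes 1 CFP.
        return self.entries + self.exits + self.reads + self.writes

def total_cfp(processes) -> int:
    return sum(p.cfp for p in processes)

processes = [
    FunctionalProcess("create order", entries=1, exits=1, reads=2, writes=1),
    FunctionalProcess("list orders", entries=1, exits=1, reads=1),
]
print(total_cfp(processes))  # 8 CFP
```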

2.
Context: There are two interrelated difficulties in requirements engineering processes. First, free-format modelling practices in requirements engineering activities may lead to low-quality artefacts and productivity problems. Second, the COSMIC Function Point method is not yet widespread in the software industry, because applying measurement rules to imprecise and ambiguous textual requirements is difficult and requires additional human measurement effort. This challenge is common to all functional size measurement methods.
Objective: In this study, alternative solutions are investigated to address these two difficulties. Information created during the requirements engineering process is formalized as an ontology, which also becomes a convenient model for transforming requirements into COSMIC Function Point concepts.
Method: A method is proposed to automatically measure the functional size of software using the designed ontology. The proposed method has been implemented as a software application and verified on real projects conducted within the ICT department of a leading telecommunications provider in Turkey.
Results: We demonstrate a novel method, based on a newly developed requirements engineering ontology, for automatically measuring the functional size of software in COSMIC FP. The proposed method has several advantages over methods explored in previous research.
Conclusion: Manual and automated measurement results are in agreement, and the tool is promising both for the company under study and for the industry at large.

3.
Context: The COSMIC functional size measurement method applied to UML diagrams has been investigated as a means to estimate software effort early in the software development life cycle. Like other functional size measurement methods, the COSMIC method takes into account the data movements in, for example, UML sequence diagrams, but does not consider the data manipulations in the control structure. This paper explores software sizing at a finer level of granularity by taking into account the structural aspect of a sequence diagram in order to quantify its structural size. These functional and structural sizes can then be used as distinct independent variables to improve effort estimation models.
Objective: The objective is to design an improved measurement of the size of UML sequence diagrams that takes into account the data manipulations represented by the structure of the sequence diagram, referred to here as its structural size.
Method: While the design of COSMIC defines the functional size of a functional process at a high level of granularity (i.e., the data movements), the structural size of a sequence diagram is defined at a finer level of granularity: the size of the flow graph of its control structure, described through the alt, opt and loop constructs. This new measurement method was designed by following the process recommended in Software Metrics and Software Metrology (Abran, 2010).
Results: The size of sequence diagrams can now be measured from two perspectives, functional and structural, at different levels of granularity and with distinct measurement units.
Conclusion: It is now feasible to measure the size of functional requirements at two levels of granularity: at an abstract level, the software functional size can be measured in COSMIC Function Point (CFP) units; at a detailed level, the software structural size can be measured in Control Structure Manipulation (CSM) units. These measures represent complementary aspects of software size and can be used as distinct independent variables to improve effort estimation models.
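To make the idea of structural size concrete, here is a minimal sketch that counts the alt, opt and loop combined fragments of a toy sequence-diagram encoding. The one-unit-per-construct rule and the encoding are simplifying assumptions for illustration; the paper defines CSM units on the flow graph of the control structure.

```python
# Hedged sketch: deriving a "structural size" for a sequence diagram from its
# alt/opt/loop combined fragments. The one-unit-per-construct rule below is a
# simplification, not the paper's exact flow-graph-based definition.
from collections import Counter

# A toy, hypothetical encoding of a sequence diagram's control structure.
sequence_diagram = [
    ("opt", "discount applies"),
    ("alt", "payment accepted"),
    ("alt", "payment rejected"),
    ("loop", "for each order line"),
]

def structural_size(fragments) -> int:
    counts = Counter(kind for kind, _ in fragments)
    # Assumption: each alt branch, opt and loop adds one CSM unit.
    return counts["alt"] + counts["opt"] + counts["loop"]

print(structural_size(sequence_diagram))  # 4 CSM units
```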

4.
Since the introduction of COSMIC Function Points, the problem of converting historical data measured in traditional Function Points into COSMIC measures has arisen. To this end, several researchers have investigated the possibility of identifying the relationship between the two measures by means of statistical methods. This paper aims at improving the statistical convertibility of Function Points into COSMIC Function Points by improving on previous work with respect to aspects that until now were not adequately considered, such as outlier identification and exclusion, model non-linearity, and applicability conditions, with the purpose of confirming, correcting or enhancing current models. Available datasets including software sizes measured both in Function Points and in COSMIC Function Points were analyzed. The role of outliers was studied; non-linear models and piecewise linear models were derived in addition to linear models, and models based on transactions only were also derived. Confidence intervals were used throughout to assess the values of the models' parameters. The dependence on size of the ratio between Function Points and COSMIC Function Points was studied. The union of all the available datasets was also studied, to overcome problems due to the relatively small size of individual datasets. It is shown that outliers do affect the linear models, typically by increasing the slope of the regression lines; however, this happens mostly in small datasets: in the union of the available datasets there is no outlier that can influence the model. Conditions for the applicability of statistical conversion are identified, in terms of relationships that must hold among the basic functional components of Function Point measures. Non-linear models are shown to represent the relationship between the two measures well, since the ratio between COSMIC Function Points and Function Points appears to increase with size. In general, it is confirmed that convertibility can be modeled by different types of models. This is a problem for practitioners, who have to choose one of these models; nevertheless, a few practical suggestions can be derived from the results reported here. The model assuming that one FP equals one CFP causes the biggest conversion errors observed and is not generally supported. All the considered datasets are characterized by a ratio of transaction functions to data functions that is fairly constant throughout each dataset: this can be regarded as a condition for the applicability of current models, and under this condition non-linear (log-log) models perform reasonably well. The fact that the size of a process is bounded in Function Point Analysis, while it is not in the COSMIC method, seems to be the cause of the non-linearity of the relationship between the two measures. In general, it appears that the conversion can be successfully based on transaction functions alone, without loss of precision.
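A log-log model of the kind discussed above can be fit with ordinary least squares on log-transformed data, giving a conversion of the form CFP ≈ exp(b) · FP^a, where a > 1 captures the more-than-proportional growth of CFP with size. The sketch below uses invented data points purely for illustration.

```python
# Sketch of a log-log FP-to-CFP conversion model, one of the model families
# the paper evaluates. The data points are made up for illustration only.
import numpy as np

fp  = np.array([ 55.0, 120.0, 240.0, 400.0,  780.0])  # hypothetical FP sizes
cfp = np.array([ 50.0, 130.0, 270.0, 480.0, 1010.0])  # hypothetical CFP sizes

# Fit log(cfp) = a * log(fp) + b by ordinary least squares.
a, b = np.polyfit(np.log(fp), np.log(cfp), deg=1)

def fp_to_cfp(x: float) -> float:
    return float(np.exp(b) * x ** a)

print(round(a, 2))            # a > 1 indicates CFP grows faster than FP
print(round(fp_to_cfp(300)))  # converted size for a 300 FP project
```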

5.
Functional Size Measurement (FSM) methods are intended to measure the size of software by quantifying the functional user requirements of the software. The capability to accurately quantify the size of software in an early stage of the development lifecycle is critical to software project managers for evaluating risks, developing project estimates and having early project indicators. In this paper, we present OO-Method Function Points (OOmFP), which is a new FSM method for object-oriented systems that is based on measuring conceptual schemas. OOmFP is presented following the steps of a process model for software measurement. Using this process model, we present the design of the measurement method, its application in a case study, and the analysis of different evaluation types that can be carried out to validate the method and to verify its application and results.

6.
Background: Functional size measurement methods are increasingly being adopted by software organizations due to the benefits they provide to software project managers. The Function Point Analysis (FPA) measurement method has been used extensively and globally in software organizations. The COSMIC measurement method is considered a second-generation FSM method because of the novel aspects it brings to the FSM field. After the COSMIC method was proposed, the issue of convertibility from FPA to COSMIC arose, the main problem being the ability to convert FPA historical data to the corresponding COSMIC Function Point (CFP) data with a high level of accuracy, which would give organizations the ability to use the data in their future planning. Almost all the convertibility studies found in the literature convert FPA measures to COSMIC measures statistically, based on the final sizes generated by both methods.
Objectives: This paper has three main objectives. The first is to explore the accuracy of the conversion type that converts FPA measures to COSMIC measures statistically, and of the type that converts FPA transaction function measures to COSMIC measures. The second is to propose a new conversion type that predicts the number of COSMIC data movements based on the number of file types referenced by all the elementary processes in a single application. The third is to compare the accuracy of the proposed conversion type with the two conversion types found in the literature.
Method: One dataset from the management information systems domain was used to compare the accuracy of all three conversion types, following a systematic conversion approach that applies three regression models: Ordinary Least Squares, Robust Least Trimmed Squares, and logarithmic transformation. Four datasets from previous studies were used to evaluate the accuracy of the three conversion types, with the Leave-One-Out Cross Validation technique applied to obtain measures of fitting accuracy.
Results: The most often used conversion type, as well as the conversion type based on transaction function size, were found to generate nonlinear, inaccurate results that are invalid according to measurement theory. In addition, they lose measurement information in the conversion process because of the FPA weighting system and FPA structural problems, such as illegal scale transformations. The proposed conversion type avoids the problems inherent in the other two types, but not the nonlinearity problem. Furthermore, the proposed conversion type was found to be more accurate than the other types when the COSMIC functional processes in a dataset's applications are systematically larger than their corresponding FPA elementary processes, or when the processes vary from small to large. Finally, the proposed conversion type delivered better results over the tested datasets, whereas, in general, there is no statistically significant difference between the accuracy of the conversion types examined for every dataset; in particular, the most often used conversion type is not the most accurate.
Conclusions: The proposed conversion type achieves accurate results over the tested datasets. However, the lack of knowledge needed to apply it to all the datasets in the literature limits the value of this conclusion. Consequently, practitioners converting from FPA to COSMIC should not settle on a single conversion type, assuming that it is the best. To achieve a high level of accuracy in the conversion process, all three conversion types should be tested via a systematic conversion approach.
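The Leave-One-Out Cross Validation procedure mentioned in the Method section can be sketched in a few lines: each project is held out in turn, a conversion model is fit on the remainder, and the held-out prediction error is recorded. The dataset and the two candidate models below are illustrative assumptions, not the study's data.

```python
# Sketch of Leave-One-Out Cross Validation for comparing two conversion
# models (linear and log-log). Dataset values are invented for illustration.
import numpy as np

fp  = np.array([ 60.0, 110.0, 205.0, 350.0, 520.0,  800.0])
cfp = np.array([ 55.0, 120.0, 230.0, 410.0, 640.0, 1050.0])

def loocv_mae(x, y, fit, predict):
    errors = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i          # hold out one project
        model = fit(x[mask], y[mask])
        errors.append(abs(predict(model, x[i]) - y[i]))
    return float(np.mean(errors))

linear = loocv_mae(fp, cfp,
                   fit=lambda x, y: np.polyfit(x, y, 1),
                   predict=lambda m, x: np.polyval(m, x))
loglog = loocv_mae(fp, cfp,
                   fit=lambda x, y: np.polyfit(np.log(x), np.log(y), 1),
                   predict=lambda m, x: np.exp(np.polyval(m, np.log(x))))
print(f"linear MAE: {linear:.1f}, log-log MAE: {loglog:.1f}")
```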

7.
Function Points (FP) are a useful software metric first proposed 25 years ago; since then, FP has steadily evolved into a functional size metric consolidated in the well-accepted International Function Point Users Group (IFPUG) Counting Practices Manual, version 4.2. While the software development industry has grown rapidly, the weight values assigned in standard FP counting have remained the same, which raises critical questions about their validity. In this paper, we discuss the concept of calibrating Function Points, with the aims of estimating a software size that better fits specific software applications, reflecting software industry trends, and improving the cost estimation of software projects. A FP calibration model called the Neuro-Fuzzy Function Point Calibration Model (NFFPCM), which integrates the learning ability of neural networks with the ability of fuzzy logic to capture human knowledge, is proposed. Empirical validation using release 8 of the International Software Benchmarking Standards Group (ISBSG) data repository shows a 22% improvement in the mean magnitude of relative error (MMRE) of software effort estimation after calibration.
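The MMRE figure cited above is the mean, over all projects, of the absolute relative error of the effort estimates. A minimal sketch, with invented effort values:

```python
# MMRE (Mean Magnitude of Relative Error) over a set of effort estimates.
# The effort values below are hypothetical, for illustration only.
import numpy as np

actual    = np.array([120.0, 300.0, 80.0, 560.0])  # person-hours
estimated = np.array([100.0, 330.0, 95.0, 500.0])

def mmre(actual, estimated):
    return float(np.mean(np.abs(actual - estimated) / actual))

print(f"MMRE = {mmre(actual, estimated):.2f}")
```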

8.
To address the problem that the Function Point Analysis (FPA) method, lacking precise definitions, yields measurement results that deviate from actual values, the measurement rules of FPA are formally defined on the basis of the B method, thereby providing an unambiguous definition for function point counting. A case study shows that applying the B method to software measurement can improve the efficiency of software project management and lays a foundation for the automated measurement of software functional size.

9.
Software effort estimation is an important part of software project management, and function point measurement has gradually become the dominant method in this field. This paper identifies the transaction functions and data functions of the function point method from UML artifacts (use case diagrams, class diagrams, and sequence diagrams), analyzes their complexity, and derives the final function point count, thereby combining function point measurement with UML modeling and realizing a mapping from these UML artifacts to function points. Combined with the IFPUG function point counting steps, a function point analysis procedure based on UML modeling is proposed. Empirical results show that the method can further refine the effort measurement of software projects modeled with UML, making it easier for project managers to control project activities and to allocate staff and other resources reasonably, and can, to some extent, address the frequent budget and schedule overruns of software projects.

10.
Quantitative Analysis of Software Functional Flexibility
Through a morphological analysis of software, a definition of software functional flexibility is given from the perspective of mechanics, and the factors that affect it are analyzed. A layered model for quantitatively measuring software functional flexibility is proposed, with flexibility force, flexibility distance, and flexibility points as its core elements. The model measures flexibility distance with function point counting and determines the value of flexibility force by grading flexibility points, thereby solving the problem of quantitatively measuring both, so that software functional flexibility can be measured quantitatively. The measurement process is described with an example and the results are analyzed. Finally, remaining problems in the measurement and goals for further research are pointed out.

11.

Background

COSMIC Function Points and traditional Function Points (i.e., IFPUG Function Points and more recent variations such as NESMA and FISMA) are probably the best known and most widely used functional size measurement methods. The relationship between the two kinds of Function Points still needs to be investigated: if traditional Function Points could be accurately converted into COSMIC Function Points and vice versa, then by measuring one kind one would be able to obtain the other, and one could measure either kind interchangeably. Several studies have been performed to evaluate whether a correlation or a conversion function between the two measures exists. Specifically, it has been suggested that the relationship between traditional Function Points and COSMIC Function Points may not be linear, i.e., the value of COSMIC Function Points seems to increase more than proportionally to an increase in traditional Function Points.

Objective

This paper aims at verifying this hypothesis using available datasets that collect both FP and CFP size measures.

Method

Rigorous statistical analysis techniques are used, specifically Piecewise Linear Regression, whose applicability conditions are systematically checked. A Piecewise Linear Regression curve is a series of interconnected segments; in this paper, we focused on curves composed of two segments. We also used Linear and Parabolic Regressions to check whether, and to what extent, Piecewise Linear Regression provides an advantage over other regression techniques. Two categories of regression techniques were used: Ordinary Least Squares regression, based on the usual minimization of the sum of squared residuals (or, equivalently, of the average squared residual), and Least Median of Squares regression, a robust technique based on the minimization of the median squared residual. Using a robust regression technique helps filter out the excessive influence of outliers.

Results

The analysis of the relationship between traditional Function Points and COSMIC Function Points based on the aforementioned data analysis techniques yields valid, statistically significant models. However, different results are achieved for the various available datasets: in practice, we obtained statistically valid linear, piecewise linear, and non-linear conversion formulas for several datasets. In general, none of these is better than the others in a statistically significant manner.

Conclusions

Practitioners interested in the conversion of FP measures into CFP (or vice versa) cannot just pick a conversion model and be sure that it will yield the best results. All the regression models we tested provide good results with some datasets. In practice, all the models described in the paper, in particular both linear and non-linear ones, should be evaluated in order to identify those best suited for the specific dataset at hand.
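A two-segment piecewise linear regression of the kind used in this study can be fit by grid-searching the breakpoint and regressing on a hinge basis, which keeps the two segments connected. The sketch below is a simplified illustration on invented FP/CFP data, not the paper's exact procedure (which also checks applicability conditions and uses robust variants).

```python
# Sketch of a two-segment piecewise linear regression. Continuity at the
# breakpoint c is enforced by the hinge basis max(0, x - c); c is chosen by
# grid search over candidate values. Data points are invented.
import numpy as np

fp  = np.array([ 40.0,  90.0, 150.0, 220.0, 300.0, 420.0, 600.0,  850.0])
cfp = np.array([ 38.0,  85.0, 150.0, 235.0, 340.0, 520.0, 800.0, 1200.0])

def fit_piecewise(x, y, c):
    # Design matrix: intercept, x, and hinge term for the second segment.
    X = np.column_stack([np.ones_like(x), x, np.maximum(0.0, x - c)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    return beta, float(residuals @ residuals)

# Grid search over interior breakpoints, keeping the lowest-SSE fit.
best_c, (beta, sse) = min(
    ((c, fit_piecewise(fp, cfp, c)) for c in range(100, 800, 25)),
    key=lambda item: item[1][1],
)
print(f"breakpoint ~ {best_c}, slopes {beta[1]:.2f} and {beta[1] + beta[2]:.2f}")
```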

12.
This paper presents an empirical study that evaluates OO-Method Function Points (OOmFP), a functional size measurement procedure for object-oriented systems specified using the OO-Method approach. A laboratory experiment with students was conducted to compare OOmFP with the IFPUG Function Point Analysis (FPA) procedure on a range of variables, including efficiency, reproducibility, accuracy, perceived ease of use, perceived usefulness and intention to use. The results show that OOmFP is more time-consuming than FPA, but its measurement results are more reproducible and accurate. The results also indicate that OOmFP is perceived to be more useful and more likely to be adopted in practice than FPA in the context of OO-Method systems development. We also report lessons learned and suggest improvements to the experimental procedure, as well as replications of this study with samples of industry practitioners.

13.
顾勋梅  虞慧群 《计算机应用》2009,29(11):3107-3109
Functional size measurement (FSM) methods obtain the functional size of software by quantifying functional user requirements (FUR). To address the problem that different FSM methods describe a software system with different abstractions, a generic FSM model is proposed. Based on an abstract model of the software system, the data groups and transactions involved in measurement are first generalized; then, taking IFPUG FPA as an example, the conversion process between the generic model and FPA is described in detail; finally, an algorithmic description of the measurement process is given.

14.
COSMIC-FFP Functional Size Measurement for Object-Oriented Methods
None of the four existing ISO-conformant FSM methods can take object interaction and object behavior into account, so they cannot correctly measure the functional size of object-oriented systems. Based on an analysis of the object-oriented software development process and the characteristics of object-oriented systems, a full function point measurement method for object-oriented development is proposed on the basis of COSMIC-FFP. The mapping rules and measurement rules of the method are given, and its application process is analyzed with an example, providing an effective way to correctly measure the functional size of object-oriented systems.

15.
An Improvement of Full Function Points for Web Applications
顾勋梅  虞慧群 《计算机应用》2008,28(12):3098-3101
Full Function Points (FFP) is a widely applied and easy-to-use functional size measurement (FSM) method, but by itself it can only measure the static aspects of a system and cannot take object interaction and object behavior into account. Based on the measurement elements of COSMIC-FFP and the structure of Web applications, the COSMIC-FFP software model is improved, measurement rules tailored to Web applications are listed, and an example illustrating the use of these rules is given.

16.
17.
Size is a major parameter for estimating the effort and cost of software applications in general and mobile applications in particular, and estimating effort, cost and time is a key step in the life cycle of a software project. In order to create a sound schedule for the project, it is therefore important to obtain these estimates as early as possible in the software development life cycle. In past years, many methods have been employed to estimate the size and effort of mobile applications, but so far these methods do not meet customers' expectations. In this paper, we present a new size measurement method, Mobile COSMIC Function Points (MCFP), based on the COSMIC approach, which is a primary factor in the estimation of mobile application development effort. This paper analyzes the possibility of using a combination of functional and non-functional parameters, including both Mobile Technical Complexity Factors (MTCF) and Mobile Environmental Complexity Factors (MECF), for mobile application size prediction and hence effort estimation. For the purpose of this study, thirty-six mobile applications were analyzed and their sizes and efforts compared by applying the new effort estimation approach. In this context, a few investigations have been performed to compare the effectiveness of COSMIC, FPs, and the proposed approach, the "COSMIC Plus Effort Estimation Model" (CPEEM). The main goal of this paper is to investigate whether the inclusion of non-functional parameters has an effect on the functional size of mobile application development. When estimating effort with the proposed approach, the results were promising for mobile applications when comparing the results of our approach with those of the other two approaches.
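The abstract does not give the CPEEM formula, but adjustment by technical and environmental complexity factors is commonly done with linear weighting in the style of FPA's value adjustment factor or Use Case Points. The sketch below is a hypothetical scheme in that spirit; the weights and rating scales are assumptions, not the paper's model.

```python
# Hedged sketch: adjusting a COSMIC functional size with technical (MTCF)
# and environmental (MECF) complexity factors. The weighting scheme below is
# a hypothetical illustration, not the paper's actual CPEEM formula.
def adjusted_size(cfp: float, mtcf_ratings, mecf_ratings) -> float:
    # Ratings assumed on a 0..5 scale, as in classical FPA/UCP-style factors.
    tcf = 0.6 + 0.01 * sum(mtcf_ratings)   # hypothetical technical factor
    ecf = 1.4 - 0.03 * sum(mecf_ratings)   # hypothetical environmental factor
    return cfp * tcf * ecf

print(round(adjusted_size(120, mtcf_ratings=[3, 4, 2, 5], mecf_ratings=[3, 3, 4]), 1))
```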

18.
A function point (FP) is a unit of measurement that expresses the amount of functionality an information system provides to a user. Many software organizations use FPs to estimate the effort required for software development. However, it is essential that the definition of one FP be based on the software development experience of the organization. In the present study, we propose a method to automatically extract data and transaction functions from Web applications, under several conditions, using static analysis. The proposed method is based on the International Function Point Users Group (IFPUG) method and has been implemented as an FP measurement tool. We applied the proposed method to several Web applications and examined the difference between the FP counts obtained by the tool and those obtained by a certified FP specialist (CFPS). The results reveal that the numbers of data and transaction functions extracted by the tool are approximately the same as those extracted by the specialist.
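The idea of statically locating data and transaction functions can be illustrated by scanning source text for data-access statements: the tables touched suggest candidate data functions, and the statements that touch them suggest candidate transaction functions. The regex and classification below are simplified assumptions for illustration, not the tool's actual extraction rules.

```python
# Hedged sketch of static extraction of candidate data functions (tables)
# and transaction functions (data-access verbs) from Web application source.
import re

SQL = re.compile(r"\b(SELECT|DELETE)\b.*?\bFROM\b\s+(\w+)|"
                 r"\b(INSERT)\s+INTO\s+(\w+)|"
                 r"\b(UPDATE)\s+(\w+)", re.I)

def extract_candidates(source: str):
    tables, verbs = set(), set()
    for m in SQL.finditer(source):
        groups = [g for g in m.groups() if g]
        verbs.add(groups[0].upper())    # candidate transaction function kind
        tables.add(groups[-1].lower())  # candidate data function
    return tables, verbs

code = 'db.run("SELECT name FROM users"); db.run("INSERT INTO orders VALUES (?)")'
print(extract_candidates(code))  # ({'users', 'orders'}, {'SELECT', 'INSERT'})
```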

19.
Software effort estimation is an important but difficult task. Existing algorithmic models often fail to predict effort accurately and consistently. To address this, we developed a computational approach to software effort estimation. cEstor is a case-based reasoning engine developed from an analysis of expert reasoning. cEstor's architecture explicitly separates case-independent productivity adaptation knowledge (rules) from case-specific representations of prior projects encountered (cases). Using new data from actual projects, uncalibrated cEstor generated estimates that compare favorably with those of the referent expert, calibrated Function Points, and calibrated COCOMO, and that were better than those produced by uncalibrated Basic COCOMO and Intermediate COCOMO. The roles of the specific knowledge components in cEstor (cases, adaptation rules and retrieval heuristics) were also examined. The results indicate that case-independent productivity adaptation rules affect the consistency of estimates and that appropriate case selection affects their accuracy, but the combination of an adaptation rule set and an unrestricted case base can yield the best estimates. Retrieval heuristics based on source lines of code and a Function Count heuristic based on summing over differences in parameter values were found to be equivalent in accuracy and consistency, and both performed better than a heuristic based on Function Count totals.

20.
Current software cost estimation models, such as the 1981 Constructive Cost Model (COCOMO) and its 1987 Ada COCOMO update, have been experiencing increasing difficulties in estimating the costs of software developed with new life-cycle processes and capabilities. These include non-sequential and rapid-development process models; reuse-driven approaches involving commercial off-the-shelf (COTS) packages, re-engineering, applications composition, and applications generation capabilities; object-oriented approaches supported by distributed middleware; and software process maturity initiatives. This paper summarizes research in deriving a baseline COCOMO 2.0 model tailored to these new forms of software development, including the rationale for the model decisions. The major new modeling capabilities of COCOMO 2.0 are a tailorable family of software sizing models, involving Object Points, Function Points, and Source Lines of Code; nonlinear models for software reuse and re-engineering; an exponent-driver approach for modeling relative software diseconomies of scale; and several additions, deletions and updates to the previous COCOMO effort-multiplier cost drivers. The model is serving as a framework for an extensive current data collection and analysis effort to further refine and calibrate its estimation capabilities.
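COCOMO 2.0's exponent-driver approach makes the scale exponent itself a function of rated scale factors, so diseconomies of scale grow with project risk, while multiplicative cost drivers scale the result. The sketch below shows the general Post-Architecture form with placeholder constants and hypothetical ratings; it is not a calibrated model.

```python
# Sketch of the COCOMO 2.0 Post-Architecture form: effort grows nonlinearly
# with size, with the exponent driven by scale factors and the result scaled
# by effort multipliers. Constants are illustrative placeholders only.
from math import prod

def cocomo_effort(ksloc: float, scale_factors, effort_multipliers,
                  a: float = 2.94, b: float = 0.91) -> float:
    e = b + 0.01 * sum(scale_factors)          # diseconomy-of-scale exponent
    return a * ksloc ** e * prod(effort_multipliers)

# 50 KSLOC project with hypothetical driver ratings; result in person-months.
print(round(cocomo_effort(50, scale_factors=[3.7, 3.0, 4.2, 2.1, 3.1],
                          effort_multipliers=[1.0, 1.1, 0.9]), 1))
```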
