Similar Documents
20 similar documents found.
1.
The adoption of functional size measurement (FSM) methods in software organizations is growing. In particular, special attention is being paid to the COSMIC method because of its novel features compared with first-generation FSM methods such as IFPUG FPA. One of the main problems facing organizations wanting to use COSMIC is how to properly convert the functional size of the projects in their portfolio, measured with the previously adopted FSM method, to the size measured with the new method. The objective of this paper is to find a sound mathematical basis for converting an IFPUG measurement into a COSMIC measurement. In the light of previously published research, parallel measurements were performed to establish three new datasets (comprising 21, 14 and 35 data points, respectively), verified by an expert measurer certified in both techniques. To obtain a more precise solution, the search for a mathematical relationship was run using new nonlinear equation types. The analysis confirmed an approximate conversion factor of 1:1, within a range between 0.9 and 1.1, but based on a larger number of data points than in past studies. These results can be very useful for companies starting to apply benchmarking databases populated in IFPUG FP units to projects measured in COSMIC FP.
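A minimal sketch of how such a conversion factor could be applied in practice, assuming the approximate 1:1 factor and the 0.9 to 1.1 range reported above; the 120 FP example project is hypothetical:

```python
def ifpug_to_cosmic(fp_size, factor=1.0, low=0.9, high=1.1):
    """Convert an IFPUG FP size into a COSMIC CFP estimate plus a range.

    The defaults mirror the approximate factor and interval reported in
    the study; they are dataset-specific, not universal constants.
    """
    return fp_size * factor, (fp_size * low, fp_size * high)

# hypothetical 120 FP project
estimate, (lo, hi) = ifpug_to_cosmic(120)
```

Reporting the interval alongside the point estimate keeps the conversion honest about the spread observed across the datasets.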

2.
Software development companies today widely use software functional size measurement (FSM) as the main variable to assess the effort and time needed to perform a new software project. In recent years, this has led to a growing interest in improving the way the measures are taken. In this sense, one of the main aspects that could affect measurements, and that has not been sufficiently studied, is the error introduced by the measurer of the software application through the subjectivity that can enter the interpretation of the measurement unit's application rules. Such error could show up as measurement dispersion, defined in this paper in two possible ways: (a) horizontal dispersion, where the error could be introduced by two or more different people counting the same application at the same moment in the project development; and (b) vertical dispersion, where the error could be introduced by the same measurer counting the same application at different times during the development. Since its definition by Albrecht in 1979 and its subsequent change of name in 1986, the IFPUG function point has been the most widely applied functional software measurement unit, despite the definition and standardization of other variants such as NESMA, Mk-II, or more recently FiSMA. However, in recent years a new method called COSMIC, defined as a second-generation FSM method, has attracted the interest of the international software measurement community. The aim of this research is to draw some preliminary conclusions from statistical analysis of software functional size data in which a degree of horizontal dispersion could have been introduced in measurements taken with the IFPUG and COSMIC methods.
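Horizontal dispersion, as defined above, can be quantified with a simple relative-spread statistic. The sketch below uses the coefficient of variation, which is an illustrative choice rather than the paper's own metric, and the three measurers' counts are hypothetical:

```python
from statistics import mean, stdev

def horizontal_dispersion(counts):
    """Relative spread (coefficient of variation) of the sizes reported by
    different measurers counting the same application at the same moment."""
    return stdev(counts) / mean(counts)

# hypothetical FP counts from three measurers of one application
cv = horizontal_dispersion([410, 395, 428])
```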

3.
The use of functional size measurement (FSM) methods in software development organizations has grown over the years. Object-oriented (OO) techniques have also become a de facto standard for software design, and use cases in particular are one of the most widely used techniques for specifying functional requirements. The main FSM methods do not include specific rules for measuring software functionality from its use case analysis; other methods, such as Kramer's functional measurement method, have been developed to deal with this issue. Therefore, one of the main issues for organizations willing to adopt an OO functional measurement method, in order to ease the use case counting procedure, is how to convert the functional size of their portfolio from the previously adopted FSM method to the new method. The objective of this research is to find a statistical relationship for converting software functional size units measured with the International Function Point Users Group (IFPUG) function point analysis (FPA) method into Kramer-Smith's use case points (UCP) method and vice versa. Methodologies for correct data gathering are proposed, and the results obtained are analyzed to derive linear and non-linear equations for this correlation. Finally, a conversion factor and corresponding conversion intervals are given to establish the statistical relationship.

4.
Context
The COSMIC functional size measurement method on UML diagrams has been investigated as a means to estimate software effort early in the software development life cycle. Like other functional size measurement methods, the COSMIC method takes into account the data movements in, for example, the UML sequence diagrams, but does not consider the data manipulations in the control structure. This paper explores software sizing at a finer level of granularity by taking into account the structural aspect of a sequence diagram in order to quantify its structural size. These functional and structural sizes can then be used as distinct independent variables to improve effort estimation models.

Objective
The objective is to design an improved measurement of the size of UML sequence diagrams by taking into account the data manipulations represented by the structure of the sequence diagram, referred to as their structural size.

Method
While the design of COSMIC defines the functional size of a functional process at a high level of granularity (i.e. the data movements), the structural size of a sequence diagram is defined at a finer level of granularity: the size of the flow graph of its control structure described through the alt, opt and loop constructs. This new measurement method was designed by following the process recommended in Software Metrics and Software Metrology (Abran, 2010).

Results
The size of sequence diagrams can now be measured from two perspectives, functional and structural, at different levels of granularity and with distinct measurement units.

Conclusion
It is now feasible to measure the size of functional requirements at two levels of granularity: at an abstract level, the software functional size can be measured in COSMIC Function Point (CFP) units; and at a detailed level, the software structural size can be measured in Control Structure Manipulation (CSM) units. These measures represent complementary aspects of software size and can be used as distinct independent variables to improve effort estimation models.
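The structural-size idea can be illustrated by counting the alt, opt and loop combined fragments of a sequence diagram. The nested-dictionary representation and the one-unit-per-construct rule below are illustrative assumptions, not the paper's exact CSM counting rules:

```python
def structural_size(fragment):
    """Recursively count alt/opt/loop combined fragments in a
    sequence-diagram tree (a simplified, hypothetical representation)."""
    size = 1 if fragment.get("kind") in {"alt", "opt", "loop"} else 0
    for child in fragment.get("children", []):
        size += structural_size(child)
    return size

# a toy sequence diagram: a loop containing an alt, followed by an opt
diagram = {"kind": "sd", "children": [
    {"kind": "loop", "children": [
        {"kind": "alt", "children": []},
        {"kind": "message", "children": []},
    ]},
    {"kind": "opt", "children": []},
]}
size = structural_size(diagram)  # three constructs contribute to the structural size
```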

5.
Background
Functional size measurement (FSM) methods are increasingly being adopted by software organizations due to the benefits they provide to software project managers. The Function Point Analysis (FPA) measurement method has been used extensively and globally in software organizations. The COSMIC measurement method is considered a second-generation FSM method because of the novel aspects it brings to the FSM field. After the COSMIC method was proposed, the issue of convertibility from FPA to COSMIC arose, the main problem being the ability to convert FPA historical data to the corresponding COSMIC Function Point (CFP) data with a high level of accuracy, which would give organizations the ability to use the data in their future planning. Almost all the convertibility studies found in the literature convert FPA measures to COSMIC measures statistically, based on the final size generated by both methods.

Objectives
This paper has three main objectives. The first is to explore the accuracy of the conversion type that converts FPA measures to COSMIC measures statistically, and that of the type that converts FPA transaction function measures to COSMIC measures. The second is to propose a new conversion type that predicts the number of COSMIC data movements based on the number of file types referenced by all the elementary processes in a single application. The third is to compare the accuracy of the proposed conversion type with that of the other two conversion types found in the literature.

Method
One dataset from the management information systems domain was used to compare the accuracy of all three conversion types using a systematic conversion approach that applies three regression models: Ordinary Least Squares, Robust Least Trimmed Squares, and logarithmic transformation. Four datasets from previous studies were used to evaluate the accuracy of the three conversion types, with the Leave-One-Out Cross Validation technique applied to obtain measures of fitting accuracy.

Results
The conversion type most often used, as well as the conversion type based on transaction function size, were found to generate nonlinear, inaccurate and invalid results according to measurement theory. In addition, they produce a loss of measurement information in the conversion process because of the FPA weighting system and FPA structural problems, such as illegal scale transformation. The proposed conversion type avoids the problems inherent in the other two types, but not the nonlinearity problem. Furthermore, the proposed conversion type was found to be more accurate than the other types when the COSMIC functional processes in the dataset applications are systematically larger than their corresponding FPA elementary processes, or when the processes vary from small to large. Finally, the proposed conversion type delivered better results over the tested datasets, whereas, in general, there is no statistically significant difference between the accuracy of the conversion types examined for every dataset; in particular, the conversion type most often used is not the most accurate.

Conclusions
The proposed conversion type achieves accurate results over the tested datasets. However, the lack of knowledge needed to use it over all the datasets in the literature limits the value of this conclusion. Consequently, practitioners converting from FPA to COSMIC should not stay with only one conversion type, assuming that it is the best. To achieve a high level of accuracy in the conversion process, all three conversion types must be tested via a systematic conversion approach.
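The evaluation machinery described in the Method, an OLS conversion model scored with Leave-One-Out Cross Validation, can be sketched as follows. The FP/CFP pairs are hypothetical, and the actual study also applies robust and logarithmic models:

```python
def ols_fit(xs, ys):
    """Ordinary Least Squares fit of y = a + b*x (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def loocv_mmre(xs, ys):
    """Leave one point out, fit on the rest, and average the magnitude of
    relative error of the held-out predictions."""
    errors = []
    for i in range(len(xs)):
        a, b = ols_fit(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        errors.append(abs((a + b * xs[i]) - ys[i]) / ys[i])
    return sum(errors) / len(errors)

fp = [100, 150, 200, 250, 300, 400]   # hypothetical IFPUG sizes
cfp = [95, 160, 210, 240, 320, 410]   # hypothetical COSMIC sizes
mmre = loocv_mmre(fp, cfp)
```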

6.
Context
There are two interrelated difficulties in requirements engineering processes. First, free-format modelling practices in requirements engineering activities may lead to low-quality artefacts and productivity problems. Second, the COSMIC Function Point Method is not yet widespread in the software industry, because applying measurement rules to imprecise and ambiguous textual requirements is difficult and requires additional human measurement effort. This challenge is common to all functional size measurement methods.

Objective
In this study, alternative solutions have been investigated to address these two difficulties. Information created during the requirements engineering process is formalized as an ontology that also becomes a convenient model for transforming requirements into COSMIC Function Point Method concepts.

Method
A method is proposed to automatically measure the functional size of software by using the designed ontology. The proposed method has been implemented as a software application and verified with real projects conducted within the ICT department of a leading telecommunications provider in Turkey.

Results
We demonstrated a novel method to measure the functional size of software in COSMIC FP automatically, based on a newly developed requirements engineering ontology. Our proposed method has several advantages over other methods explored in previous research.

Conclusion
Manual and automated measurement results are in agreement, and the tool is promising for the company under study and for the industry at large.

7.
Function point analysis is a very widely used method for software size estimation. Addressing the discontinuity problem that arises when the IFPUG (International Function Point Users Group) function point analysis method classifies the complexity levels of functional components, this paper combines fuzzy theory with interpolation to propose a fuzzy-interpolation function point analysis method, which resolves the discontinuity and imprecision in complexity-level classification. A case study shows that the new method not only estimates the number of function points more accurately, but is also highly practical.

8.
A Simplified Measurement Method Based on IFPUG Function Points
Software size measurement is an important basis for software project management, and is especially significant in the early stages of a project. The standard IFPUG function point method requires detailed knowledge of the software before a measurement can be completed, and its counting process is complex, which limits its application early in a project. To address these problems, a simplified measurement method based on IFPUG is proposed: the five object types of the standard method are reduced to three (transaction functions plus internal and external data functions), the weighting factor of each object type is fixed, and the result is reported as a function point value together with a range, providing a reliability reference for the simplified result. The simplified method lowers the difficulty of using the standard function point method and reduces the measurement steps. Validation on real projects shows a measurement reliability above 60%, more accurate than other simplified methods.
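A sketch of the simplified scheme described above, with three object types, fixed weights and a reported range. The weight values and the ±20% spread are illustrative assumptions, not the figures from the paper:

```python
# illustrative fixed weights for the three simplified object types
WEIGHTS = {"transaction": 4.0, "internal_data": 7.0, "external_data": 5.0}

def simplified_fp(counts, spread=0.2):
    """Return the simplified FP value and a reliability range around it."""
    total = sum(WEIGHTS[kind] * counts.get(kind, 0) for kind in WEIGHTS)
    return total, (total * (1 - spread), total * (1 + spread))

size, (low, high) = simplified_fp(
    {"transaction": 30, "internal_data": 8, "external_data": 4})
```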

9.
The Function Point (FP) is a useful software metric first proposed 25 years ago; since then, it has steadily evolved into a functional size metric standardized in the well-accepted International Function Point Users Group (IFPUG) Counting Practices Manual, version 4.2. While the software development industry has grown rapidly, the weight values assigned to count standard FP have remained the same, which raises critical questions about their validity. In this paper, we discuss the concept of calibrating Function Points, with the aims of estimating a more accurate software size that fits specific software applications, reflecting software industry trends, and improving the cost estimation of software projects. An FP calibration model called the Neuro-Fuzzy Function Point Calibration Model (NFFPCM), which integrates the learning ability of neural networks with the ability of fuzzy logic to capture human knowledge, is proposed. Empirical validation using release 8 of the International Software Benchmarking Standards Group (ISBSG) data repository shows a 22% improvement in the mean magnitude of relative error (MMRE) of software effort estimation after calibration.
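MMRE, the accuracy criterion used in the abstract, is straightforward to compute. The actual/predicted effort values below are hypothetical and merely illustrate how a calibration-driven improvement would be measured:

```python
def mmre(actual, predicted):
    """Mean magnitude of relative error across a set of projects."""
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

actual = [100, 200, 400]                 # hypothetical actual efforts
before = mmre(actual, [130, 150, 520])   # uncalibrated predictions
after = mmre(actual, [110, 185, 430])    # calibrated predictions
improvement = (before - after) / before  # relative MMRE improvement
```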

10.
Size is the main parameter for estimating the effort and cost of software applications in general and mobile applications in particular, and estimating effort, cost and time is a key step in the life cycle of a software project. To create a sound project schedule, it is therefore important to obtain these estimates as early as possible in the software development life cycle. Over the years, many methods have been employed to estimate the size and effort of mobile applications, but these methods still do not meet customer expectations. In this paper, we present a new size measurement method, Mobile COSMIC Function Points (MCFP), based on the COSMIC approach, as a primary factor for effort estimation in mobile application development. The paper analyzes the possibility of using a combination of functional and non-functional parameters, including both Mobile Technical Complexity Factors (MTCF) and Mobile Environmental Complexity Factors (MECF), for mobile application size prediction and hence effort estimation. For the purpose of this study, thirty-six mobile applications were analyzed, and their sizes and efforts were compared by applying the new effort estimation approach. In this context, a few investigations were performed to compare the effectiveness of COSMIC, FPs, and the proposed approach, the "COSMIC Plus Effort Estimation Model (CPEEM)". The main goal of this paper is to investigate whether the inclusion of non-functional parameters affects the functional size of mobile application development. Upon estimating effort with the proposed approach, the results were promising for mobile applications when compared with the results of the other two approaches.

11.
Size Estimation of Military Software Using the IFPUG Function Point Method
The IFPUG function point method, proposed by the International Function Point Users Group, is a widely used method for measuring the functional size of software. It does not depend on the implementation language, and its results can be compared across different development processes. This paper provides a framework for applying the method to size estimation of military software systems. With this method, developers can estimate system size early in development, providing a reliable basis for later effort and cost estimation; once the system is complete, the result can serve as an effective input for other measurements and evaluations.

12.
Context
Functional size measurement methods are widely used but have two major shortcomings: they require a complete and detailed knowledge of user requirements, and they involve relatively expensive and lengthy processes.

Objective
UML is routinely used in the software industry to effectively describe software requirements in an incremental way, so UML models grow in detail and completeness through the requirements analysis phase. Here, we aim at defining the characteristics of increasingly more refined UML requirements models that support increasingly more sophisticated, hence presumably more accurate, size estimation processes.

Method
We consider the COSMIC method and three alternative processes (two of which are proposed in this paper) to estimate COSMIC size measures that can be applied to UML diagrams at progressive stages of the requirements definition phase. Then, we check the accuracy of the estimates by comparing the results obtained on a set of projects to the functional size values obtained with the standard COSMIC method.

Results
Our analysis shows that it is possible to write increasingly more detailed and complete UML models of user requirements that provide the data required by COSMIC size estimation methods, which in turn yield increasingly more accurate size measure estimates of the modeled software. Initial estimates are based on simple models and are obtained quickly and with little effort. The estimates increase in accuracy as models grow in completeness and detail, i.e., as the requirements definition phase progresses.

Conclusion
Developers that use UML for requirements modeling can obtain an early estimate of application size at the beginning of the development process, when only a very simple UML model has been built, and can obtain increasingly more accurate size estimates as knowledge of the product increases and the UML models are refined accordingly.

13.
A function point (FP) is a unit of measurement that expresses the amount of functionality an information system provides to a user. Many software organizations use FPs to estimate the effort required for software development. However, it is essential that the definition of 1 FP be based on the software development experience of the organization. In the present study, we propose a method to automatically extract data and transaction functions from Web applications under several conditions using static analysis. The proposed method is based on the International Function Point Users Group (IFPUG) method and has been implemented as an FP measurement tool. We applied the proposed method to several Web applications and examined the differences between the FP counts obtained by the tool and those obtained by a certified FP specialist (CFPS). The results reveal that the numbers of data and transaction functions extracted by the tool are approximately the same as those extracted by the specialist.

14.
Since the introduction of COSMIC Function Points, the problem of converting historical data measured in traditional Function Points into COSMIC measures has arisen. To this end, several researchers have investigated the possibility of identifying the relationship between the two measures by means of statistical methods. This paper aims at improving the statistical convertibility of Function Points into COSMIC Function Points by improving previous work with respect to aspects, such as outlier identification and exclusion, model non-linearity, and applicability conditions, which up to now were not adequately considered, with the purpose of confirming, correcting or enhancing current models. Available datasets including software sizes measured both in Function Points and COSMIC Function Points were analyzed. The role of outliers was studied; non-linear models and piecewise linear models were derived, in addition to linear models. Models based on transactions only were also derived. Confidence intervals were used throughout the paper to assess the values of the models' parameters. The dependence of the ratio between Function Points and COSMIC Function Points on size was studied. The union of all the available datasets was also studied, to overcome problems due to the relatively small size of the datasets. It is shown that outliers do affect the linear models, typically increasing the slope of the regression lines; however, this happens mostly in small datasets: in the union of the available datasets there is no outlier that can influence the model. Conditions for the applicability of statistical conversion are identified, in terms of relationships that must hold among the basic functional components of Function Point measures. Non-linear models are shown to represent the relationship between the two measures well, since the ratio between COSMIC Function Points and Function Points appears to increase with size. In general, it is confirmed that convertibility can be modeled by different types of models. This is a problem for practitioners, who have to choose one of these models; nonetheless, a few practical suggestions can be derived from the results reported here. The model assuming that one FP is equal to one CFP causes the biggest conversion errors observed and is not generally supported. All the considered datasets are characterized by a ratio of transaction to data functions that is fairly constant throughout each dataset: this can be regarded as a condition for the applicability of current models; under this condition, non-linear (log-log) models perform reasonably well. The fact that the size of a process is bounded in Function Point Analysis, while it is not in the COSMIC method, seems to be the cause of the non-linearity of the relationship between the two measures. In general, it appears that the conversion can be successfully based on transaction functions alone, without loss of precision.
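A log-log conversion model of the form found to perform reasonably well, log(CFP) = a + b*log(FP), can be fitted with ordinary least squares in log space. The dataset below is hypothetical, chosen so that the CFP/FP ratio grows with size as the study observed:

```python
import math

def fit_loglog(fps, cfps):
    """OLS fit of log(CFP) = a + b * log(FP)."""
    xs = [math.log(v) for v in fps]
    ys = [math.log(v) for v in cfps]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def convert(fp, a, b):
    """Predicted CFP for a given FP size under the log-log model."""
    return math.exp(a) * fp ** b

fps = [50, 100, 200, 400, 800]   # hypothetical FP sizes
cfps = [45, 98, 210, 460, 1000]  # hypothetical CFP sizes, growing faster
a, b = fit_loglog(fps, cfps)
# b > 1 reproduces the reported non-linearity: the CFP/FP ratio increases with size
```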

15.
Jones, C. Computer, 1995, 28(11): 87-88
The availability of empirical data from projects that use both function-point and lines-of-code metrics has led to a useful technique called backfiring. Backfiring is the direct mathematical conversion of LOC data into equivalent function-point data. Because the backfiring equations are bidirectional, they also provide a powerful way of sizing, or predicting, source-code volume for any known programming language or combination of languages. The function-point metric, invented by A.J. Albrecht of IBM in the mid-1970s, is a synthetic metric derived from a weighted formula that includes five elements: inputs, outputs, logical files, inquiries, and interfaces. IBM put it into the public domain in 1979, and its use spread rapidly, particularly after the formation of the International Function Point Users Group (IFPUG) in the mid-1980s. By then, hundreds of software projects had been measured using both function points and lines of source code. Since an application's function-point total is independent of the source code, this dual analysis has led to several important discoveries.
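Backfiring as described above reduces to multiplying or dividing by a language-level factor. The LOC-per-FP figures below are illustrative assumptions, since published backfiring tables vary by source and language version:

```python
# assumed LOC-per-FP factors, for illustration only
LOC_PER_FP = {"C": 128, "Java": 53, "COBOL": 107}

def loc_to_fp(loc, language):
    """Backfire a line-of-code count into an equivalent function point count."""
    return loc / LOC_PER_FP[language]

def fp_to_loc(fp, language):
    """The bidirectional inverse: predict source-code volume from FP."""
    return fp * LOC_PER_FP[language]

fp = loc_to_fp(12800, "C")    # 12,800 LOC of C backfires to 100 FP
loc = fp_to_loc(fp, "COBOL")  # the same functionality sized in COBOL
```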

16.

Background

COSMIC Function Points and traditional Function Points (i.e., IFPUG Function Points and more recent variations of Function Points, such as NESMA and FISMA) are probably the best known and most widely used functional size measurement methods. The relationship between the two kinds of Function Points still needs to be investigated. If traditional Function Points could be accurately converted into COSMIC Function Points and vice versa, then, by measuring one kind of Function Points, one would be able to obtain the other kind, and one might measure either kind interchangeably. Several studies have been performed to evaluate whether a correlation or a conversion function between the two measures exists. Specifically, it has been suggested that the relationship between traditional Function Points and COSMIC Function Points may not be linear, i.e., the value of COSMIC Function Points seems to increase more than proportionally to an increase of traditional Function Points.

Objective

This paper aims at verifying this hypothesis using available datasets that collect both FP and CFP size measures.

Method

Rigorous statistical analysis techniques are used, specifically Piecewise Linear Regression, whose applicability conditions are systematically checked. The Piecewise Linear Regression curve is a series of interconnected segments. In this paper, we focused on Piecewise Linear Regression curves composed of two segments. We also used Linear and Parabolic Regressions, to check if and to what extent Piecewise Linear Regression may provide an advantage over other regression techniques. We used two categories of regression techniques: Ordinary Least Squares regression is based on the usual minimization of the sum of squares of the residuals, or, equivalently, on the minimization of the average squared residual; Least Median of Squares regression is a robust regression technique that is based on the minimization of the median squared residual. Using a robust regression technique helps filter out the excessive influence of outliers.

Results

It appears that the analysis of the relationship between traditional Function Points and COSMIC Function Points based on the aforementioned data analysis techniques yields valid significant models. However, different results for the various available datasets are achieved. In practice, we obtained statistically valid linear, piecewise linear, and non-linear conversion formulas for several datasets. In general, none of these is better than the others in a statistically significant manner.

Conclusions

Practitioners interested in the conversion of FP measures into CFP (or vice versa) cannot just pick a conversion model and be sure that it will yield the best results. All the regression models we tested provide good results with some datasets. In practice, all the models described in the paper, both linear and non-linear, should be evaluated in order to identify the ones best suited for the specific dataset at hand.
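A two-segment piecewise linear fit of the kind used in this study can be sketched by scanning candidate breakpoints and fitting OLS on each side. The FP/CFP pairs are hypothetical, constructed so that large sizes grow faster than small ones:

```python
def ols(xs, ys):
    """OLS fit y = a + b*x, returning the coefficients and the SSE."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    return a, b, sse

def piecewise_fit(xs, ys):
    """Best two-segment fit over sorted data: try every breakpoint that
    leaves at least two points per segment, keep the lowest total SSE."""
    best = None
    for k in range(2, len(xs) - 1):
        a1, b1, s1 = ols(xs[:k], ys[:k])
        a2, b2, s2 = ols(xs[k:], ys[k:])
        if best is None or s1 + s2 < best[0]:
            best = (s1 + s2, xs[k], (a1, b1), (a2, b2))
    return best

fps = [50, 80, 100, 150, 300, 400, 500, 600]   # hypothetical, sorted FP sizes
cfps = [48, 82, 101, 148, 330, 460, 590, 720]  # ~1:1 when small, steeper when large
sse, split_at, (a1, b1), (a2, b2) = piecewise_fit(fps, cfps)
# the second segment's slope b2 exceeds b1, matching the non-linearity hypothesis
```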

17.
Conceptual Association of Functional Size Measurement Methods
IEEE Software, 2009, 26(3): 71-78
Functional size determines how much functionality software provides by measuring the aggregate amount of its cohesive execution sequences. Alan Albrecht first introduced the concept in 1979. Since he originally described the function point analysis (FPA) method, researchers and practitioners have developed variations of functional size metrics and methods. The authors discuss the conceptual similarities and differences between functional size measurement methods and introduce a model for unification.

18.
In this paper we propose a framework for validating software measurement. We start by defining a measurement structure model that identifies the elementary components of measures and the measurement process, and then consider five other models involved in measurement: unit definition models, instrumentation models, attribute relationship models, measurement protocols, and entity population models. We examine a number of measures from the viewpoint of our measurement validation framework and identify several shortcomings; in particular, we identify a number of problems with the construction of function points. We also compare our view of measurement validation with ideas presented by other researchers and identify a number of areas of disagreement. Finally, we suggest several rules that practitioners and researchers can use to avoid measurement problems, including the use of measurement vectors rather than artificially contrived scalars.

19.
We present an empirical validation of object-oriented size estimation models. In previous work we proposed object-oriented function points (OOFP), an adaptation of the function point approach to object-oriented systems. In a small pilot study, we used the OOFP method to estimate lines of code (LOC). In this paper we extend the empirical validation of OOFP substantially, using a larger data set and comparing OOFP with alternative predictors of LOC. The aim of the paper is to gain an understanding of which factors contribute to accurate size prediction for OO software, and to position OOFP within that knowledge. A cross-validation approach was adopted to build and evaluate linear models in which the independent variable was either a traditional OO entity (classes, methods, associations, inheritance, or a combination of them) or an OOFP-related measure. Using the full OOFP process, the best size predictor achieved a normalized mean squared error of 38%. By removing function point weighting tables from the OOFP process, and carefully analyzing collected data points and developer practices, we identified several factors that influence size estimation. Our empirical evidence demonstrates that by controlling these factors, size estimates could be substantially improved, decreasing the normalized mean squared error to 15%, a 56% reduction in relative terms.

20.
Function points have become an accepted measure of software size and are becoming an industry standard. However, the application of function point analysis is fairly complex and requires experience and a good understanding to apply consistently. This paper describes the development of a knowledge-based, object-oriented system to assist an analyst in performing function point analysis. The objective of the function point analysis (FPA) tool is to allow an analyst to estimate system size in function points without extensive training or experience in the function point method. The FPA tool uses information available in a functional specification, a product of the requirements analysis phase of the software development life cycle. An object-oriented model was used to represent the functional requirements of a software system.
