Similar Documents
20 similar documents found.
1.
Measuring programmer productivity and estimating programming time and costs are among the most worrisome and persistent problems facing the programming manager. A key element in both problem areas is program complexity. It has been demonstrated in practice that a measure of program complexity is an indispensable aid in evaluating a programming effort. The purpose of this paper is to present a prototype for a composite measure of program complexity. The paper presents the basis for a technique by which an objective, quantitative evaluation of any program or programming effort can be made. This index of complexity gives the manager a tool for the quantitative assessment of programming efforts, so that judgments about the relative merits of programs and programmers can be based on objective data and an objective measure. The measure is applied to a reference group of COBOL programs, several of which were written in a structured programming environment. The index of complexity and the data from which it is derived are used to compare the complexity of structured vs. unstructured COBOL programming styles.

2.
Context: Effort-aware models, e.g., effort-aware bug prediction models, aim to help practitioners identify and prioritize buggy software locations according to the effort involved with fixing the bugs. Since the effort of current bugs is not yet known and the effort of past bugs is typically not explicitly recorded, effort-aware bug prediction models are forced to use approximations, such as the number of lines of code (LOC) of the predicted files. Objective: Although the choice of these approximations is critical for the performance of the prediction models, there is no empirical evidence on whether LOC is actually a good approximation. Therefore, in this paper, we investigate the question: is LOC a good measure of effort for use in effort-aware models? Method: We perform an empirical study on four open source projects, for which we obtain explicitly-recorded effort data, and compare the use of LOC to various complexity, size and churn metrics as measures of effort. Results: We find that a combination of complexity, size and churn metrics is a better measure of effort than LOC alone. Furthermore, we examine the impact of our findings on previous effort-aware bug prediction work and find that using LOC as a measure of effort does not significantly affect the list of files being flagged; however, using LOC under-estimates the amount of effort required compared to our best effort predictor by approximately 66%. Conclusion: Studies using effort-aware models should not assume that LOC is a good measure of effort. For the case of effort-aware bug prediction, using LOC provides results that are similar to combining complexity, churn, size and LOC as a proxy for effort when prioritizing the most risky files. However, for the purpose of effort estimation, using LOC may under-estimate the amount of effort required.
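To make the idea concrete, here is a minimal sketch of effort-aware prioritization in the sense this abstract uses, assuming LOC as the effort proxy; the file names, defect probabilities, and sizes are invented for illustration and are not the paper's data:

```python
# Sketch: effort-aware prioritization of files, assuming LOC approximates fix effort.
# File names, risk scores, and LOC values are illustrative, not from the paper's data.

files = [
    # (file name, predicted defect probability, lines of code)
    ("parser.c", 0.80, 2400),
    ("util.c",   0.35,  150),
    ("net.c",    0.60,  900),
]

# Effort-aware score: expected defects found per line inspected or changed.
ranked = sorted(files, key=lambda f: f[1] / f[2], reverse=True)

for name, p_defect, loc in ranked:
    print(f"{name}: {p_defect / loc:.5f} defect-probability per LOC")
```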

3.
Two types of models can assist the information system manager in gaining greater insight into the system development process: isomorphic models, which represent cause-effect relationships between certain conditions (e.g., structured techniques) and certain observable states (e.g., productivity change); and paramorphic models, which describe an outcome but not the processes or variables that influence it (e.g., estimation of project time or cost). The two models are shown to be interrelated, since the relationships of the first model are determinants of the parameters of the second. IS managers can make significant contributions by developing isomorphic models tailored to their own organizations. However, metrics that measure relevant characteristics of programs and systems are required before substantial progress can be made. Although some initial attempts have been made to develop metrics for program quality, program complexity, and programmer skill, much more work remains to be done. In addition, other metrics must be developed, which will require the involvement of personnel not only in the computer sciences, but also in information systems, the behavioral sciences, and IS management.

4.
A finite automaton with multiplication (FAM) is a finite automaton with a register which is capable of holding any positive rational number. The register can be multiplied by any of a fixed number of rationals and can be tested for value 1. Closure properties and decision problems for various types of FAMs (e.g. two-way, one-way, nondeterministic, deterministic) are investigated. In particular, it is shown that the languages recognized by two-way deterministic FAMs are of tape complexity log n and time complexity n³. Some decision problems related to vector addition systems are also studied.
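A minimal sketch of the machine model, assuming a one-way deterministic variant is enough to convey it: the register below holds an exact rational, is multiplied on each input symbol, and is tested for the value 1 at the end. The particular transition rules are invented; with them the machine accepts exactly the words over {a, b} containing equally many a's and b's.

```python
# Sketch: a one-way deterministic finite automaton with multiplication (FAM).
# The register holds a positive rational, can be multiplied by fixed rationals,
# and can be tested for the value 1. The rules below are illustrative only.

from fractions import Fraction

def run_fam(word: str) -> bool:
    register = Fraction(1)
    for symbol in word:
        if symbol == "a":
            register *= Fraction(2)      # multiply by 2 on 'a'
        elif symbol == "b":
            register *= Fraction(1, 2)   # multiply by 1/2 on 'b'
        else:
            return False                 # reject symbols outside the alphabet
    return register == 1                 # test: is the register back to 1?

print(run_fam("abba"))   # True  (two a's, two b's)
print(run_fam("aab"))    # False (register ends at 2)
```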

5.
6.
Complexity measures and provable recursive functions (p-functions) are combined to define a p-measure as a measure for which Blum's axioms can be proved in a given axiomatic system. For p-measures, it is shown that the complexity class of a p-function contains only p-functions and that all p-functions form a single complexity class. Various other classes and a variation of a complexity measure, all suggested by the notion of provability, are also investigated. Classical results in complexity theory remain true when relativized to p-functions.

7.
The increasing cost of software maintenance has resulted in greater emphasis on the production of maintainable software. One method used to enhance the development of maintainable software is to employ complexity metrics as a technique for controlling software complexity. In order to control complexity, it is imperative to plan for increases in complexity levels from code generation to code implementation. This paper presents a study of complexity increases during the testing and debugging phases of the software life cycle. The metrics used to measure complexity are lines of code, unique operators, unique operands, data difficulty, Halstead's effort and cyclomatic complexity.
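A rough sketch of how the token-based metrics named here can be computed; the tokenizer below works on Python source and is a simplification, so the numbers are illustrative rather than those of a faithful Halstead tool:

```python
# Sketch: computing several metrics from the abstract (unique operators/operands,
# Halstead difficulty and effort) for a small Python fragment.

import io
import keyword
import math
import tokenize

def halstead_metrics(source: str) -> dict:
    operators, operands = [], []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.OP or (tok.type == tokenize.NAME and keyword.iskeyword(tok.string)):
            operators.append(tok.string)
        elif tok.type in (tokenize.NAME, tokenize.NUMBER, tokenize.STRING):
            operands.append(tok.string)
    n1, n2 = len(set(operators)), len(set(operands))   # unique operators/operands
    N1, N2 = len(operators), len(operands)             # total occurrences
    volume = (N1 + N2) * math.log2(n1 + n2) if n1 + n2 > 0 else 0.0
    difficulty = (n1 / 2) * (N2 / n2) if n2 > 0 else 0.0
    return {"unique_operators": n1, "unique_operands": n2,
            "volume": volume, "difficulty": difficulty,
            "effort": volume * difficulty}             # Halstead's E = D * V

print(halstead_metrics("x = a + b * a\nif x > 0:\n    x = x - 1\n"))
```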

8.
Lines of code metrics are routinely used as measures of software system complexity, programmer productivity, and defect density, and are used to predict both effort and cost. The guidelines for using a direct metric, such as lines of code, as a proxy for a quality factor such as complexity or defect density, or in derived metrics such as cost and effort, are clear. Amongst other criteria, the direct metric must be linearly related to, and accurately predict, the quality factor, and these properties must be validated through statistical analysis following a rigorous validation methodology. In this paper, we conduct such an analysis to determine the validity and utility of lines of code as a measure, using the ISBSG-10 data set. We find that it fails to meet the specified validity tests and, therefore, has limited utility in derived measures.
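The kind of statistical validity test the abstract refers to can be sketched as follows; the LOC and effort figures are invented, not drawn from the ISBSG data set, and the snippet needs Python 3.10+ for statistics.correlation and statistics.linear_regression:

```python
# Sketch: testing whether a direct metric (LOC) is linearly related to a quality
# factor (effort). All data points below are hypothetical.

import statistics

loc    = [1200, 4500, 800, 10000, 3200, 6700]   # hypothetical module sizes
effort = [ 300,  900, 150,  1700,  650, 1600]   # hypothetical effort (person-hours)

r = statistics.correlation(loc, effort)          # Pearson's r (Python 3.10+)
slope, intercept = statistics.linear_regression(loc, effort)

print(f"Pearson r = {r:.3f}")
print(f"effort ~= {slope:.3f} * LOC + {intercept:.1f}")
# A metric validated as a proxy would need consistently high |r| across data
# sets, not just on one convenient sample.
```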

9.
周海玲  孙涌 《微机发展》2006,16(2):23-25
All successful software organizations use measurement as an important means of assuring the quality of their management and technology, and software cost estimation is the core task of software measurement [1,2]. To improve the accuracy of cost estimation, this paper calibrates the basic COCOMO model against historical project data from a specific software company; for the parameter-correction step it applies a logarithmic data-regression algorithm, compares it with other methods, and obtains satisfactory results. The calibrated model predicts project development cost more accurately, thereby realizing the practical guidance that COCOMO cost measurement can offer a software project. The calibration of the cost-estimation model presented here is therefore of real practical value to software development companies.
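The basic COCOMO form Effort = a · KLOC^b is standard; below is a hedged sketch of the calibration idea, fitting a and b by least squares in log space on invented historical data (an illustration, not the paper's correction algorithm):

```python
# Sketch: calibrating basic COCOMO (effort = a * KLOC**b) to historical data by
# linear least squares in log space: log(effort) = log(a) + b*log(KLOC).
# The project data below are invented for illustration. Requires Python 3.10+.

import math
import statistics

kloc   = [ 10,  25,  60, 120,  40]     # hypothetical project sizes (KLOC)
effort = [ 28,  80, 230, 540, 140]     # hypothetical effort (person-months)

x = [math.log(k) for k in kloc]
y = [math.log(e) for e in effort]

b, log_a = statistics.linear_regression(x, y)   # slope = b, intercept = log(a)
a = math.exp(log_a)

print(f"calibrated model: effort = {a:.2f} * KLOC ** {b:.2f}")
print(f"prediction for a 50 KLOC project: {a * 50 ** b:.0f} person-months")
```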

10.
Sun‐Jen Huang  Richard Lai 《Software》2002,32(12):1129-1154
An obstacle to using the software metrics and size models we have developed for measuring the complexity and maintainability of a communication protocol specified in Estelle, and for estimating the size of its specification and implementation, is the time-consuming effort of collecting the metrics. To address this problem, a software system called PSAMS (protocol specification assessment and measurement system) has been developed to automatically calculate the metrics and the sizes of specification and implementation. This paper describes the design of PSAMS, which provides five functionalities for a communication protocol Estelle specification: exploring the specification, measuring its complexity, assessing its maintainability, estimating its specification size and estimating its implementation size. To demonstrate the usefulness of PSAMS, we have applied it to measure the complexity and maintainability of 10 communication protocol Estelle specifications; the measurement results and decision-support information provided by each functionality are presented in this paper. With PSAMS, communication protocol designers and developers are able to assess the complexity of a communication protocol early in the specification stage and obtain information that helps them manage a communication software project better. Copyright © 2002 John Wiley & Sons, Ltd.

11.
This paper reports the results of an empirical investigation of the relationships between effort expended, time scales, and project size for software project development. The observed relationships were compared with those predicted by Lawrence Putnam's Rayleigh curve model and Barry Boehm's COCOMO model. The results suggested that although the form of the basic empirical relationships was consistent with the cost models, the COCOMO model was a poor estimator of cost for the current data set, and the data did not follow the Rayleigh curve suggested by Putnam. However, the results did suggest that it was possible to develop cost models tailored to a particular environment and to improve the precision of the models as they are used during the development cycle by including additional information, such as the known effort for the early development phases. The paper finishes by discussing some of the problems involved in developing useful cost models.
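For reference, the basic forms of the two models compared, hedged from their standard textbook statements rather than from the paper itself:

```latex
% Basic COCOMO (Boehm): effort in person-months from size in KLOC;
% the constants a and b depend on the development mode.
E \;=\; a \,(\mathrm{KLOC})^{\,b}

% Norden--Rayleigh staffing profile underlying Putnam's model:
% K is total life-cycle effort, t_d the time of peak staffing, a = 1/(2 t_d^2).
\dot{y}(t) \;=\; 2\,K\,a\,t\,e^{-a t^{2}}
```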

12.
A Hierarchical Model for Measuring the Complexity of Object-Oriented Programs
Program complexity measurement makes it possible to analyse program complexity quantitatively and thus provides a standard for cost estimation. To measure the complexity of object-oriented programs, this paper first discusses the definition of measurement and its theoretical basis, and then proposes a hierarchical model for object-oriented program complexity measurement. The model is divided into five layers: the system layer, the class-cluster layer, the class-inheritance-tree layer, the class layer and the method layer, each with its own measurement methods. The advantages of this layered measurement model are that it is a framework into which the individual measurement methods are assigned by layer; the layers are mutually independent; and modifying the methods of one layer does not affect the others.

13.
Conklin  Darrell  Witten  Ian H. 《Machine Learning》1994,16(3):203-225
A central problem in inductive logic programming is theory evaluation. Without some sort of preference criterion, any two theories that explain a set of examples are equally acceptable. This paper presents a scheme for evaluating alternative inductive theories based on an objective preference criterion. It strives to extract maximal redundancy from examples, transforming structure into randomness. A major strength of the method is its application to learning problems where negative examples of concepts are scarce or unavailable. A new measure called model complexity is introduced, and its use is illustrated and compared with a proof complexity measure on relational learning tasks. The complementarity of model and proof complexity parallels that of model- and proof-theoretic semantics. Model complexity, where applicable, seems to be an appropriate measure for evaluating inductive logic theories.

14.
Uncertain programming is a theoretical tool for handling optimization problems under uncertainty; it is mainly established in probability, possibility, or credibility measure spaces. Sugeno measure space is an interesting and important extension of probability measure space. This motivates us to discuss uncertain programming based on Sugeno measure space. We have previously constructed the first type of uncertain programming on Sugeno measure space, i.e. the expected value models of uncertain programming on Sugeno measure space. In this paper, the second type, i.e. chance-constrained programming on Sugeno measure space, is investigated. Firstly, the definition and characteristics of the α-optimistic value and α-pessimistic value as ranking measures are provided. Secondly, Sugeno chance-constrained programming (SCCP) is introduced. Lastly, in order to construct an approximate solution to the complex SCCP, the ideas of Sugeno random number generation and Sugeno simulation are presented, along with a hybrid approach.
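As background, the general shape of a chance-constrained model, hedged from the standard template of uncertain programming (the Sugeno-measure-specific definitions of the chance measure Ch and the α-optimistic value are given in the paper and are not reproduced here):

```latex
% Generic chance-constrained template: maximize the optimistic objective value
% \bar{f} subject to constraints holding with chance at least alpha and beta_j.
\begin{aligned}
\max_{x}\ & \bar{f} \\
\text{s.t.}\ & \mathrm{Ch}\{ f(x,\xi) \ge \bar{f} \} \ge \alpha, \\
             & \mathrm{Ch}\{ g_j(x,\xi) \le 0 \} \ge \beta_j, \quad j = 1,\dots,p.
\end{aligned}
```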

15.
Goal-oriented Requirements Engineering approaches have become popular in the Requirements Engineering community as they provide expressive modelling languages for requirements elicitation and analysis. However, as a common challenge, such approaches still struggle when it comes to managing the accidental complexity of their models. Furthermore, those models might be incomplete, resulting in insufficient information for proper understanding and implementation. In this paper, we provide a set of metrics, which are formally specified and have tool support, to measure and analyse the complexity and completeness of goal models, in particular social goal models (e.g. i*). Concerning complexity, the aim is to identify refactoring opportunities to improve the modularity of those models and consequently reduce their accidental complexity. With respect to completeness, the goal is to automatically detect model incompleteness. We evaluate these metrics by applying them to a set of well-known system models from industry and academia. Our results suggest refactoring opportunities in the evaluated models and provide a timely feedback mechanism for requirements engineers on how close they are to completing their models.

16.
This paper investigates the impact of risk attitude on incentives and performance in a two-stage (research stage and development stage) new product development setting with one senior executive (she) and one project manager (he). The senior executive offers a wage contract to the project manager in the presence of dual information asymmetry: the project manager's idea value for a new product is unknown in the early research stage, and his effort to convert the idea into a product is unobservable in the later development stage. Owing to the variability of technology and the market, the subjective assessments of the idea value and of the revenue generated by the product are characterized as uncertain variables. Within the framework of uncertainty theory, we first present four classes of uncertain principal-agent models and then derive their respective optimal wage contract mechanisms. We find that the structure of the senior executive's optimal mechanism depends on the project manager's risk attitude. If the project manager becomes more conservative, the senior executive should set a low incentive term to avert risk; otherwise, she should do the opposite. Moreover, we identify two values: the information value of the idea, i.e. how much the senior executive is willing to pay to acquire information regarding the project manager's idea value, and the information value of the effort, i.e. how much the senior executive stands to gain when she can contract on the project manager's effort. Our results show that acquiring the project manager's idea information yields the highest potential when the project manager is aggressive, but in the case of contracting on his effort, the opposite is true. The results also indicate that acquiring more information on an aggressive project manager's idea always has a higher impact on the senior executive's profits than contracting on his effort. We also provide several interesting managerial insights into new product development through our analytical and simulation results.

17.
k-Anonymity is a method for providing privacy protection by ensuring that data cannot be traced to an individual. In a k-anonymous dataset, any identifying information occurs in at least k tuples. To achieve optimal and practical k-anonymity, many different kinds of algorithms, with various assumptions and restrictions, have recently been proposed, along with different metrics to measure quality. This paper evaluates a family of clustering-based algorithms that are more flexible and even attempt to improve precision by ignoring the restrictions of user-defined Domain Generalization Hierarchies. The evaluation of the new approaches with respect to cost metrics shows that metrics may behave differently with different algorithms and may not correlate with some applications' accuracy on output data.
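A minimal sketch of the k-anonymity property itself, assuming the quasi-identifier values have already been generalized; the records and column choice are invented:

```python
# Sketch: checking whether a table is k-anonymous with respect to a set of
# quasi-identifier columns. The records below are illustrative.

from collections import Counter

records = [
    {"zip": "130**", "age": "30-39", "disease": "flu"},
    {"zip": "130**", "age": "30-39", "disease": "cold"},
    {"zip": "148**", "age": "20-29", "disease": "flu"},
    {"zip": "148**", "age": "20-29", "disease": "asthma"},
]
quasi_identifiers = ("zip", "age")

def is_k_anonymous(rows, qi, k):
    # Every combination of quasi-identifier values must occur in >= k rows.
    groups = Counter(tuple(row[c] for c in qi) for row in rows)
    return all(count >= k for count in groups.values())

print(is_k_anonymous(records, quasi_identifiers, 2))  # True
print(is_k_anonymous(records, quasi_identifiers, 3))  # False
```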

18.
19.
The fact that several web accessibility metrics exist may be evidence of a lack of a comparison framework that highlights how well they work and for what purposes they are appropriate. In this paper we aim at formulating such a framework, demonstrating that it is feasible, and showing the findings we obtained when we applied it to seven existing automatic accessibility metrics. The framework encompasses validity, reliability, sensitivity, adequacy and complexity of metrics in the context of four scenarios where the metrics can be used. The experimental demonstration of the viability of the framework is based on applying seven published metrics to more than 1500 web pages and then operationalizing the notions of validity-as-conformance, adequacy and complexity. Our findings lead us to conclude that the Web Accessibility Quantitative Metric, Page Measure and Web Accessibility Barrier are the metrics that achieve the highest levels of quality (out of the seven that we examined). Finally, since we did not analyse reliability, sensitivity and validity-in-use, this paper provides guidance for addressing them in future research.

20.
Mathematical programming problems often contain many "balance equations". It is possible to transform a given formulation by eliminating balance equations; these eliminations generate different formulations which are equivalent in the sense that the same optimum can be derived from them. Some equivalent formulations for product-mix decisions were generated and solved using APEX-III, CDC's commercial package for mathematical programming. The results show that for today's powerful software one should not trust the mathematical programming folklore which states that CPU time rises roughly in proportion to the cube of the number of constraints; a far better explanation of computational effort is obtained by taking the number of nonzeros into account as well as the number of constraints.
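A hedged toy illustration of the transformation the abstract refers to, with invented variables: eliminating a balance equation removes one constraint and one variable, at the price of spreading nonzeros into the remaining constraints:

```latex
% Toy elimination of a balance equation: substituting y = x_1 + x_2 removes one
% row and one column but duplicates the nonzeros of y's definition elsewhere.
\begin{aligned}
\text{before:}\quad & x_1 + x_2 - y = 0, \qquad y \le 10, \qquad 3y + x_3 \le 30,\\
\text{after:}\quad  & x_1 + x_2 \le 10, \qquad 3(x_1 + x_2) + x_3 \le 30.
\end{aligned}
```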
