Similar Documents (20 results)
1.
Properties of software measures
During recent years much attention has been directed towards the measurement of the properties and the complexity of software. The major goal of using software measures is to obtain reliable software and an objective, numerical representation of the properties of software and of the software development process. Many software measures have been developed to determine the static complexity of single programs (intra-modular complexity) and of entire software systems (inter-modular complexity) during the phases of the software lifecycle. As a consequence of these developments, many authors have discussed the properties of software measures. In this paper a measurement theory is introduced which gives conditions for the properties of measures. The properties of software measures and the conditions for using software measures on an ordinal and a ratio scale are given. As an example, these are applied to the measures of McCabe. Because composition and decomposition operations are major strategies in software development, theorems which describe the properties of software measures under this type of operation are also presented. Properties of software measures, as required by many authors in the literature, are discussed and explained with statements of measurement theory. The results show that it is possible to explain most of the required properties of software measures in the literature with conditions of measurement theory. This makes the properties of software measures during the software lifecycle, and their application in practice, more visible.
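As context for the McCabe example above, cyclomatic complexity is V(G) = E − N + 2P over a program's control-flow graph. A minimal sketch, with an illustrative graph encoding not taken from the paper:

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe's measure V(G) = E - N + 2P for a control-flow graph
    with E edges, N nodes, and P connected components."""
    return len(edges) - len(nodes) + 2 * components

# Flow graph of a single if/else followed by one exit node:
nodes = {"entry", "then", "else", "exit"}
edges = [("entry", "then"), ("entry", "else"),
         ("then", "exit"), ("else", "exit")]
print(cyclomatic_complexity(edges, nodes))  # 4 - 4 + 2 = 2
```

For a structured single-entry, single-exit program this equals the number of decision points plus one, which is why the ratio-scale conditions discussed in the paper matter for comparing programs by V(G).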

2.
David  Henry  Dawn  Maurizio   《Journal of Systems and Software》2009,82(11):1793-1803
While challenging, the ability to predict faulty modules of a program is valuable to a software project because it can reduce the cost of software development, as well as of software maintenance and evolution. Three language-processing based measures are introduced and applied to the problem of fault prediction. The first measure is based on the usage of natural language in a program’s identifiers. The second measure concerns the conciseness and consistency of identifiers. The third measure, referred to as the QALP score, makes use of techniques from information retrieval to judge software quality. The QALP score has been shown to correlate with human judgments of software quality. Two case studies consider the language-processing measures’ applicability to fault prediction using two programs (one open source, one proprietary). Linear mixed-effects regression models are used to identify relationships between defects and the measures. Results, while complex, show that language-processing measures improve fault prediction, especially when used in combination. Overall, the models explain one-third and two-thirds of the faults in the two case studies. Consistent with other uses of language processing, the value of the three measures increases with the size of the program module considered.

3.
In attempting to describe the quality of computer software, one of the more frequently mentioned measurable attributes is complexity of the flow of control. During the past several years, there have been many attempts to quantify this aspect of computer programs, approaching the problem from such diverse points of view as graph theory and software science. The most notable measures in these areas are McCabe's cyclomatic complexity and Halstead's software effort. More recently, Woodward et al. proposed a complexity measure based on the number of crossings, or "knots," of arcs in a linearization of the flowgraph.
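The second measure named above has a simple closed form. A hedged sketch of Halstead's effort from operator/operand counts, where the sample counts are invented for illustration:

```python
import math

def halstead_effort(n1, n2, N1, N2):
    """Halstead's effort E = D * V, given distinct operator/operand
    counts (n1, n2) and total occurrence counts (N1, N2)."""
    vocabulary = n1 + n2
    length = N1 + N2
    volume = length * math.log2(vocabulary)   # V = N * log2(n)
    difficulty = (n1 / 2) * (N2 / n2)         # D = (n1 / 2) * (N2 / n2)
    return difficulty * volume

# Tiny illustrative counts, not from any real program:
print(halstead_effort(n1=4, n2=4, N1=7, N2=6))  # 3.0 * 39.0 = 117.0
```

Counting what qualifies as an "operator" versus an "operand" is language-dependent and is the contentious part in practice; the arithmetic itself is this simple.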

4.
Software complexity measures are quantitative estimates of the amount of effort required by a programmer to comprehend a piece of code. Many measures have been designed for standard procedural languages, but little work has been done to apply software complexity concepts to nontraditional programming paradigms. This paper presents a collection of software complexity measures that were specifically designed to quantify the conceptual complexity of rule-based programs. These measures are divided into two classes: bulk measures, which estimate complexity by examining aspects of program size, and rule measures, which gauge complexity based on the ways in which program rules interact with data and other rules. A pilot study was conducted to assess the effectiveness of these measures. Several measures were found to correlate well with the study participants' ratings of program difficulty and the time required by them to answer questions that required comprehension of program elements. The physical order of program rules was also shown to affect comprehension. The authors conclude that the development of software complexity measures for particular programming paradigms may lead to better tools for managing program development and predicting maintenance effort in nontraditional programming environments.

5.
Software systems must change to adapt to new functional and nonfunctional requirements. According to Lehman's laws of software evolution, on the one hand, the size and complexity of a software system will continually increase over its lifetime; on the other hand, the quality of a software system will decrease unless it is rigorously maintained and adapted. Lehman's laws of software evolution, especially those on software size and complexity, have been widely validated. However, there are few empirical studies of Lehman's law on software quality evolution, despite the fact that quality is one of the most important measurements of a software product. This paper defines a metric, accumulated defect density, to measure the quality of evolving software systems. We mine the bug reports and measure the size and complexity growth of four evolution lines of the Apache Tomcat and Apache Ant projects. Based on these studies, Lehman's law on software quality evolution is examined and evaluated.
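The abstract does not give the metric's exact formula, so the following is one plausible reading, assuming "accumulated defect density" means cumulative defect count divided by the current release size in KLOC; the release data are invented:

```python
def accumulated_defect_density(defects_per_release, kloc_per_release):
    """For each release i, cumulative defects reported up to release i
    divided by the size (KLOC) of release i. This is an illustrative
    reading; the paper's exact definition may differ."""
    densities, total = [], 0
    for defects, kloc in zip(defects_per_release, kloc_per_release):
        total += defects
        densities.append(total / kloc)
    return densities

# Hypothetical three-release evolution line:
print(accumulated_defect_density([10, 5, 8], [100, 110, 120]))
```

A rising trend in this series, despite the growing denominator, would be evidence for the quality-decline law the paper sets out to test.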

6.
Software comprehension is one of the largest costs in the software lifecycle. In an attempt to control the cost of comprehension, various complexity metrics have been proposed to characterize the difficulty of understanding a program and, thus, allow accurate estimation of the cost of a change. Such metrics are not always evaluated. This paper evaluates a group of metrics recently proposed to assess the "spatial complexity" of a program (spatial complexity is informally defined as the distance a maintainer must move within source code to build a mental model of that code). The evaluation takes the form of a large-scale empirical study of evolving source code drawn from a commercial organization. The results of this investigation show that most of the spatial complexity metrics evaluated offer no substantially better information about program complexity than the number of lines of code. However, one metric shows more promise and is thus deemed to be a candidate for further use and investigation.

7.
In this paper, we investigate how to incorporate program complexity measures into a software quality model. We collect software complexity metrics and fault counts from each build during the testing phase of a large commercial software system. Though the data are limited in quantity, we are able to predict the number of faults in the next build. The technique we used is called time series analysis and forecasting. The methodology assumes that future predictions are based on the history of past observations. We show that the combined complexity-quality model is an improvement over the simpler quality-only model. Finally, we explore how the testing process used in this development may be improved by using these predictions, and suggest areas for future research.
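The paper's model combines complexity metrics with the fault history; as a hedged sketch of the history-only baseline it improves on, here is a one-step-ahead forecast of the next build's fault count by simple exponential smoothing (the fault counts and the smoothing constant are invented):

```python
def forecast_next(series, alpha=0.5):
    """One-step-ahead forecast by simple exponential smoothing:
    the level is updated as alpha * observation + (1 - alpha) * level,
    and the final level is the forecast for the next observation."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

faults_per_build = [30, 24, 19, 16, 12]   # hypothetical fault counts
print(forecast_next(faults_per_build))    # 15.75
```

A combined model would regress the residuals of such a forecast against per-build complexity metrics, which is the kind of improvement the abstract reports.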

8.
McCabe  T. 《Software, IEEE》1996,13(3):115-117
The year 2000 problem is omnipresent, fast approaching, and will present us with something we're not used to: a deadline that can't slip. It will also confront us with two problems, one technical, the other managerial. My cyclomatic complexity measure, implemented using my company's tools, can address both of these concerns directly. The technical problem is that most of the programs using a date or time function have abbreviated the year field to two digits. Thus, as the rest of society progresses into the 21st century, our software will think it's the year 00. The managerial problem is that date references in software are everywhere; every line of code in every program in every system will have to be examined and made date compliant. In this article, I elaborate on an adaptation of the cyclomatic complexity measure to quantify and derive the specific tests for date conversion. I originated the use of cyclomatic complexity as a software metric. The specified data-complexity metric is calculated by first removing all control constructs that do not interact with the referenced data elements in the specified set, and then computing cyclomatic complexity. Specifying all global data elements gives an external coupling measure that determines encapsulation. Specifying all the date elements would quantify the effort for a year-2000 upgrade. This effort will vary depending on the quality of the code that must be changed.
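A rough sketch of the specified data-complexity idea described above. Instead of the full flowgraph reduction a real tool performs, it uses the identity V(G) = decisions + 1 for structured code; the predicate list, variable sets, and names are all illustrative:

```python
def specified_data_complexity(predicates, spec_vars):
    """Keep only the decision predicates that reference a variable in
    `spec_vars`, then apply V(G) = decisions + 1 (valid for structured
    single-entry/single-exit code). A real implementation removes the
    non-interacting control constructs from the flowgraph itself."""
    retained = [cond for cond, vars_used in predicates
                if vars_used & spec_vars]
    return len(retained) + 1

# (condition text, variables it references) -- hypothetical program:
predicates = [
    ("year < 50",  {"year"}),
    ("count > 0",  {"count"}),
    ("month == 2", {"month"}),
]
date_vars = {"year", "month", "day"}
print(specified_data_complexity(predicates, date_vars))  # 2 kept -> 3
```

Specifying the date elements, as the article suggests, then yields a per-module number that sizes the year-2000 test effort.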

9.
原子, 于莉莉, 刘超. Journal of Software (软件学报), 2014, 25(11):2499-2517
Software changes continually over its life cycle to adapt to evolving requirements and environments. To predict in time whether each change introduces a defect, researchers have proposed defect prediction methods targeted at source-code changes. Existing methods, however, have three shortcomings: (1) they only predict at a coarse granularity (transaction-level and source-file-level changes); (2) they represent changes only with a vector space model, without fully mining the program structure, natural-language semantics, and historical information buried in software repositories; and (3) they only consider prediction over short time spans, ignoring the concept drift caused during long-term software evolution by external factors such as new requirements or personnel reorganization. To address these shortcomings, this paper proposes a defect prediction method for source-code changes. The method takes fine-grained (statement-level) changes as the prediction target, which effectively reduces quality assurance cost; it mines software repositories in depth by combining static program analysis with natural-language semantic topic inference, building a feature set from four aspects of a change (context, content, time, and personnel) and thereby revealing the factors that make changes prone to introducing defects; and it analyzes the characteristics of concept drift during software evolution with a feature-entropy difference matrix, achieving stable long-term prediction through a dynamic-window learning mechanism with concept review. The effectiveness of the method is validated on six well-known open-source projects.

10.
Context: Complexity measures provide us with some information about software artifacts. A measure of the difficulty of testing a piece of code could be very useful for controlling the test phase.
Objective: The aim of this paper is the definition of a new measure of the difficulty for a computer to generate test cases, which we call Branch Coverage Expectation (BCE). We also analyze the most common complexity measures and the most important features of a program, trying to discover whether there exists a relationship between them and the code coverage of an automatically generated test suite.
Method: The definition of this measure is based on a Markov model of the program. This model is used not only to compute the BCE, but also to provide an estimation of the number of test cases needed to reach a given coverage level in the program. To check our proposal, we perform a theoretical validation and carry out an empirical validation study using 2600 test programs.
Results: The results show that the previously existing measures are not very useful for estimating the difficulty of testing a program, because they are not highly correlated with code coverage. Our proposed measure is much more highly correlated with code coverage than the existing complexity measures.
Conclusion: The high correlation of our measure with code coverage suggests that BCE is a very promising way of measuring the difficulty of automatically testing a program, and it is useful for predicting the behavior of an automatic test case generator.

11.
余峰, 陈刚. Computer Engineering and Applications (计算机工程与应用), 2003, 39(34):108-110, 229
The goal of unit testing is to verify the correctness of program modules, so as to provide components that behave as expected for integration and system testing. As requirements on software quality assurance have risen, many unit-testing techniques have been proposed. However, the growing complexity of software runtime environments and the ever-closer integration of software testing techniques with software engineering have created new demands for research on unit-testing frameworks. Drawing on international software testing standards, this paper discusses the composition of a virtual unit-testing framework for incremental development, and presents a unit-testing solution with an independent, simple test environment that improves software resilience and assures software quality.

12.
Software design complexity measurement (软件设计复杂性度量)

13.
Sun-Jen Huang  Richard Lai 《Software》1998,28(14):1465-1491
Communication software systems have become very large and complex. Recognizing the complexity of such software systems is a key element in their development activities. Software metrics are useful quantitative indicators for assessing and predicting software quality attributes, like complexity. However, most existing metrics are extracted from source programs at the implementation phase of the software life cycle. They cannot provide early feedback during the specification phase, and subsequently it is difficult and expensive to make changes to the system if so indicated by the metrics. It is therefore important to be able to measure system complexity at the specification phase. However, most software specifications are written in natural languages, from which metrics information is very hard to extract. In this paper, we describe how complexity information can be derived from a formal communication protocol specification written in Estelle, so that it is possible to predict the complexity of its implementation and subsequently its development can be better managed. © 1998 John Wiley & Sons, Ltd.

14.
Two types of models can assist the information system manager in gaining greater insight into the system development process. They are: isomorphic models, which represent cause-effect relationships between certain conditions (e.g., structured techniques) and certain observable states (e.g., productivity change); and paramorphic models, which describe an outcome but do not describe the processes or variables that influence the outcome (e.g., estimation of project time or cost). The two models are shown to be interrelated, since the relationships of the first model are determinants of the parameters of the second model. IS managers can make significant contributions by developing isomorphic models tailored to their own organizations. However, metrics that measure relevant characteristics of programs and systems are required before substantial progress can be made. Although some initial attempts have been made to develop metrics for program quality, program complexity, and programmer skill, much more work remains to be done. In addition, other metrics must be developed, which will require the involvement of personnel not only in the computer sciences, but also in information systems, the behavioral sciences, and IS management.

15.
The rising costs of software development and maintenance have naturally aroused interest in tools and measures to quantify and analyze software complexity. Many software metrics have been studied widely because of their potential usefulness in predicting the complexity and quality of software. Most of the work reported in this area has been related to non-real-time software. In this paper we report and discuss the results of an experimental investigation of some important metrics and their relationships for a class of 202 Pascal programs used in a real-time distributed processing environment. While some of our observations confirm independent studies, we have noted significant differences. For instance, the correlations between McCabe's control complexity measure and Halstead's metrics are low in comparison to a previous study. Studies of the type reported here are important for understanding the relationships between software metrics.
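The correlation analysis referred to above is ordinary Pearson correlation over per-program metric values. A self-contained sketch, with hypothetical metric values rather than data from the study:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two metric samples,
    e.g. McCabe's V(G) versus Halstead's effort per program."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-program metric values, not the paper's data:
mccabe = [2, 5, 3, 8, 4]
effort = [120, 300, 500, 260, 900]
print(round(pearson_r(mccabe, effort), 3))
```

A value near zero, as in this toy sample, is what "low correlation" means in the abstract: the two measures capture different aspects of the programs.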

16.
Testability measures have been defined on flowgraphs modelling the control flow through a program. These measures attempt to quantify aspects of the structural complexity of code that might give useful information about the testing stage of software production. This paper shows how two such metrics, the Number of Trails metric and the Mask [k = 2] metric, can be calculated axiomatically.
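The Number of Trails metric counts distinct entry-to-exit paths through the flowgraph. For a cycle-free flowgraph this can be computed by dynamic programming; a minimal sketch (the paper's axiomatic calculation also handles other cases, and the graph here is illustrative):

```python
from functools import lru_cache

def count_trails(graph, entry, exit_node):
    """Count distinct entry-to-exit paths in an acyclic flowgraph,
    memoizing the path count out of each node."""
    @lru_cache(maxsize=None)
    def paths(node):
        if node == exit_node:
            return 1
        return sum(paths(succ) for succ in graph.get(node, ()))
    return paths(entry)

# Two sequential if/else constructs: 2 * 2 = 4 distinct paths.
graph = {
    "entry": ("a", "b"),
    "a": ("join1",), "b": ("join1",),
    "join1": ("c", "d"),
    "c": ("exit",), "d": ("exit",),
}
print(count_trails(graph, "entry", "exit"))  # 4
```

The multiplicative blow-up across sequential decisions is why path-based testability measures grow much faster than cyclomatic complexity, which would rate this graph only V(G) = 3.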

17.
The rising demand and cost of software have prompted researchers to investigate factors associated with program changes in software systems. Numerous program complexity measures and statistical techniques have been used to study the effects of program complexity on program changes. This study measured the effects of programming methodology on the relationship of complexity to program changes. The results suggest that the relationship of length and structure complexity characteristics to program changes is consistent across different programming methodologies, while the relationship of program changes to characteristics concerning the use of data and procedure names is not.

18.
The use of software architecture is one of the most effective ways to improve software development quality, reduce software cost, and raise software productivity. Research on software architecture has begun to go beyond its traditional support for the software design phase and is gradually extending to the entire software life cycle. Using qualitative analysis, comparative study, and other methods, this paper expounds the basic content of software architecture research and software architecture practice. It first gives a definition of software architecture and introduces software architectural styles, then discusses software architecture practice from the perspective of the software life cycle, and finally summarizes the current state and development trends of software architecture research.

20.
白明洋, 丁争. Measurement & Control Technology (测控技术), 2014, 33(6):111-115
As the software industry develops, software keeps moving toward systematization and integration; the functions it implements grow ever more powerful and its complexity ever higher, which ultimately makes software quality harder and harder to assure. Software testability analysis provides testability information with which designers can, before design and test execution, determine how difficult testing will be and what resources it will require, and thus decide whether the design should be modified to obtain more easily testable software. Building on the information-theoretic approach to software testability analysis, this paper transforms a program into an information transfer graph and then applies information-theoretic methods to analyze its testability; finally, a fuzzy comprehensive evaluation method is introduced to assess the analysis results, and the effectiveness of the approach is verified through an example.
