Similar Documents
20 similar documents found (search time: 62 ms)
1.
In attempting to describe the quality of computer software, one of the more frequently mentioned measurable attributes is complexity of the flow of control. During the past several years, there have been many attempts to quantify this aspect of computer programs, approaching the problem from such diverse points of view as graph theory and software science. Most notable measures in these areas are McCabe's cyclomatic complexity and Halstead's software effort. More recently, Woodward et al. proposed a complexity measure based on the number of crossings, or "knots," of arcs in a linearization of the flowgraph.
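For reference, McCabe's cyclomatic complexity mentioned above can be computed directly from a flowgraph as V(G) = E - N + 2P. The sketch below is illustrative only (the flowgraph is made up and the code is not from the paper).

```python
# Minimal sketch: McCabe's cyclomatic complexity of a control-flow graph,
# V(G) = E - N + 2P (edges, nodes, connected components).
def cyclomatic_complexity(edges, nodes, components=1):
    """edges: iterable of (src, dst) arcs; nodes: iterable of node ids."""
    return len(list(edges)) - len(list(nodes)) + 2 * components

# Example flowgraph: an if/else inside a loop.
nodes = ["entry", "loop", "if", "then", "else", "join", "exit"]
edges = [("entry", "loop"), ("loop", "if"), ("if", "then"), ("if", "else"),
         ("then", "join"), ("else", "join"), ("join", "loop"), ("loop", "exit")]
print(cyclomatic_complexity(edges, nodes))  # 8 - 7 + 2 = 3 (two decisions)
```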

2.
Number of Faults per Line of Code
In this note, the number of faults or "bugs" per line of code is estimated based upon Halstead's software science relationships. This number is shown to be an increasing function of the number of lines of code in a program, a result in agreement with intuition and some current theories of complexity. The form of this function is investigated and an easy-to-use approximation is developed. An application to a moderately large software project is shown in which the predicted number of faults for program modules of various sizes agrees fairly well with the actual numbers of faults discovered.
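The note's own approximation is not reproduced here; as an illustration of the kind of Halstead-based estimate it builds on, the sketch below uses the classic Halstead relationship B ≈ V / 3000, where V = N·log2(n) is program volume. The token counts are invented.

```python
import math

# Illustrative sketch (not the note's exact formula): Halstead's classic
# delivered-bug estimate B = V / 3000, where V = N * log2(n) is program volume.
def halstead_bug_estimate(n1, n2, N1, N2):
    """n1/n2: distinct operators/operands; N1/N2: total operators/operands."""
    vocabulary = n1 + n2
    length = N1 + N2
    volume = length * math.log2(vocabulary)
    return volume / 3000.0

print(round(halstead_bug_estimate(20, 35, 300, 250), 2))  # ~1.06 expected faults
```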

3.
Numerous studies have confirmed the skewness of Halstead's Software Science Length Estimator (Beser, 1983; Gonzales, 1990). The Length Estimator consistently underestimates the size of 'small' programs (program size < 400 tokens) and overestimates the size of 'large' programs (program size > 4000 tokens). This paper verifies and corrects the Halstead Length Estimator skewness for a large collection of 'C' programs of varying sizes.
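The estimator under discussion is Halstead's length equation N̂ = n1·log2(n1) + n2·log2(n2). The sketch below compares it against an observed token count; the counts are invented and the paper's correction itself is not reproduced.

```python
import math

# Sketch of the Halstead length estimator discussed above:
# N_hat = n1*log2(n1) + n2*log2(n2), compared with the observed length N1 + N2.
def halstead_length_estimate(n1, n2):
    """n1: distinct operators, n2: distinct operands."""
    return n1 * math.log2(n1) + n2 * math.log2(n2)

observed = 300 + 250                       # N1 + N2 tokens actually counted
estimated = halstead_length_estimate(20, 35)
print(round(estimated, 1), observed, round(estimated - observed, 1))
```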

4.
The problems of developing software requirements and quality assurance techniques have basically dealt with an environment where a single organization acts as the designer, developer, and user of the software product. Since the mid-1970s, however, there has been a great increase in the use of "packaged" software products designed and developed by one organization for use in a variety of other organizations. The great profusion of products has resulted in many products being peddled for generic applications (accounting, manufacturing, etc.) which are of questionable quality and/or "fit" to a given organization's environment. This paper describes some techniques that are being used to certify software produced by third parties and how to determine if the "fit" is there. Current quality assurance techniques deal with the "correctness" of a program as compared to its specifications [2], [4], [7], [8], [12]. The real issue for a purchaser of software is whether the software is "correct" for its environment.

5.
Software Structure Metrics Based on Information Flow
Structured design methodologies provide a disciplined and organized guide to the construction of software systems. However, while the methodology structures and documents the points at which design decisions are made, it does not provide a specific, quantitative basis for making these decisions. Typically, the designers' only guidelines are qualitative, perhaps even vague, principles such as "functionality," "data transparency," or "clarity." This paper, like several recent publications, defines and validates a set of software metrics which are appropriate for evaluating the structure of large-scale systems. These metrics are based on the measurement of information flow between system components. Specific metrics are defined for procedure complexity, module complexity, and module coupling. The validation, using the source code for the UNIX operating system, shows that the complexity measures are strongly correlated with the occurrence of changes. Further, the metrics for procedures and modules can be interpreted to reveal various types of structural flaws in the design and implementation.
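The procedure-level information-flow measure associated with this line of work is commonly stated as length × (fan-in × fan-out)². The sketch below computes it for a few invented procedures; it is not code from the paper.

```python
# Sketch of the information-flow procedure complexity commonly associated with
# this work: complexity = length * (fan_in * fan_out) ** 2.
# The procedure table below is invented for illustration.
def information_flow_complexity(length, fan_in, fan_out):
    return length * (fan_in * fan_out) ** 2

procedures = {
    "read_config":  {"length": 40, "fan_in": 2, "fan_out": 1},
    "dispatch":     {"length": 120, "fan_in": 5, "fan_out": 7},
    "format_error": {"length": 15, "fan_in": 9, "fan_out": 1},
}
for name, p in procedures.items():
    print(name, information_flow_complexity(p["length"], p["fan_in"], p["fan_out"]))
```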

6.
The supersonic wind tunnel control system software currently in use suffers from low data-acquisition accuracy and poor program safety, so a new supersonic wind tunnel control system software package was designed and developed based on PMAC. The program is developed and debugged with the language editor on the Windows XP platform, a PMAC motion control card is chosen to run the control program, and the XP system coordinates the programs. Data exchange between the host computer and the communication model is implemented within the program, program code is computed and extracted from 64 functional functions, and PID calculations are performed on three independent wind tunnel parameters. Integrated software is applied to configure the computer's software resources uniformly, so that other software programs can carry out their work autonomously under the program's control. The Fame View configuration software serves as the runtime basis of the program, and a monitoring program is developed in C++. Experimental results show that the PMAC-based supersonic wind tunnel control system software achieves higher acquisition accuracy, a wider monitoring range, and more stable monitoring fluctuations, effectively ensuring the safety of the software program.
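The abstract mentions PID calculations on three independent wind tunnel parameters; the sketch below is a generic discrete PID loop, with gains, sample time, and setpoints that are placeholders rather than values from the described system.

```python
# Generic discrete PID controller sketch; gains, sample time and setpoints
# are placeholders, not values from the wind tunnel system described above.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# One hypothetical loop per independent tunnel parameter (e.g. stagnation pressure).
pressure_pid = PID(kp=1.2, ki=0.05, kd=0.01, dt=0.01)
print(pressure_pid.update(setpoint=350.0, measurement=342.5))
```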

7.
This paper presents an assessment of several published statistical regression models that relate software development effort to software size measured in function points. The principal concern with published models has to do with the number of observations upon which the models were based and inattention to the assumptions inherent in regression analysis. The research describes appropriate statistical procedures in the context of a case study based on function point data for 104 software development projects and discusses limitations of the resulting model in estimating development effort. The paper also examines a problem with the current method for measuring function points that constrains their effective use in regression models, and suggests a modification to the approach that should enhance the accuracy of prediction models based on function points in the future.
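As an illustration of the kind of regression the paper assesses, the sketch below fits a power-law effort model to function point counts in log-log space; the project data is fabricated and the fitted coefficients mean nothing beyond the example.

```python
import numpy as np

# Sketch of the kind of regression discussed above: effort modelled as a
# power function of function points, fitted in log-log space by least squares.
fp = np.array([120, 250, 400, 620, 900, 1300], dtype=float)           # function points
effort = np.array([900, 2100, 3600, 6000, 9500, 15000], dtype=float)  # person-hours

slope, intercept = np.polyfit(np.log(fp), np.log(effort), 1)
predict = lambda x: np.exp(intercept) * x ** slope

print(f"effort ~= {np.exp(intercept):.2f} * FP^{slope:.2f}")
print(round(float(predict(500)), 1))  # predicted effort for a hypothetical 500-FP project
```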

8.
In this paper a new approach for modelling program variants (versions) is proposed, focused on increasing the level of software reuse rather than on enriching the data model. In this approach, a program is composed of a program body and a set of logically independent program contexts. The program body contains global functions and global data structures. Each program context contains exactly one variant of every class defined in the program and one variant of every non-global function, called a context function. Variants of the same class/function belonging to different contexts need not be different. During program execution only one context is active; it may, however, be changed dynamically at run-time. Thus, at any particular moment the program is viewed as the sum of the program body and exactly one program context.
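A rough sketch of the context idea, under the assumption that a context is simply a named table of function variants and that switching replaces the active table; all names are invented and the paper's actual mechanism may differ.

```python
# Rough sketch of the variant/context idea described above: a program body plus
# named contexts, each holding one variant of every context-dependent function.
class Program:
    def __init__(self):
        self.contexts = {}          # context name -> {function name -> variant}
        self.active = None

    def define(self, context, name, func):
        self.contexts.setdefault(context, {})[name] = func

    def switch(self, context):      # contexts may be switched at run-time
        self.active = context

    def call(self, name, *args):
        return self.contexts[self.active][name](*args)

p = Program()
p.define("metric_units", "format_speed", lambda v: f"{v:.1f} m/s")
p.define("imperial_units", "format_speed", lambda v: f"{v * 2.23694:.1f} mph")
p.switch("metric_units")
print(p.call("format_speed", 30.0))
p.switch("imperial_units")
print(p.call("format_speed", 30.0))
```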

9.
This paper aims to present an ontology model of software engineering to represent its knowledge. The fundamental knowledge of software engineering is well described in the textbook Software Engineering by Sommerville, now in its eighth edition [1], and in the IEEE white paper Software Engineering Body of Knowledge (SWEBOK) [2], upon which the software engineering ontology is based. This paper gives an analysis of what the software engineering ontology is, what it consists of, and what it is used for, in the form of usage example scenarios. The usage scenarios presented in this paper highlight the characteristics of the software engineering ontology. The software engineering ontology assists in defining information for the exchange of semantic project information and is used as a communication framework. Its users are software engineers sharing both domain knowledge and instance knowledge of software engineering.

10.
A software functional test size estimation model based on function point analysis is proposed. The model applies to black-box functional testing and is intended for estimating the workload of the system testing or acceptance testing phase. The basic estimation steps are estimating the software size, defining the size factors, and computing the test size; the model was applied in practice on a project. The results show that the model estimates software functional test size well and can be used for drawing up and executing test plans.
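A minimal sketch of the three steps listed above (estimate software size, define size factors, compute test size). The function point weights are the usual IFPUG average weights, and the size factor is an invented placeholder, not the model's calibrated value.

```python
# Illustrative sketch only: the abstract's three steps with placeholder factors.
FP_WEIGHTS = {"inputs": 4, "outputs": 5, "inquiries": 4, "files": 10, "interfaces": 7}

def unadjusted_function_points(counts):
    return sum(FP_WEIGHTS[k] * n for k, n in counts.items())

def test_size(counts, size_factor=1.2):
    """size_factor: assumed multiplier turning FP into test-size units."""
    return unadjusted_function_points(counts) * size_factor

counts = {"inputs": 12, "outputs": 8, "inquiries": 6, "files": 4, "interfaces": 2}
print(unadjusted_function_points(counts), test_size(counts))
```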

11.
The 'compound Poisson' (CP) software reliability model was proposed previously by the first named author for time-between-failure data in terms of CPU seconds, using the 'maximum likelihood estimation' (MLE) method to estimate unknown parameters; hence, CPMLE. However, another parameter estimation technique is proposed under 'nonlinear regression analysis' (NLR) for the compound Poisson reliability model, giving rise to the name CPNLR. It is observed that the CP model, with different parameter estimation methods, produces equally satisfactory or more favourable results as compared to the Musa-Okumoto (M-O) model, particularly in the event of grouped or clustered (clumped) software failure data. The sampling unit may be a week, day or month within which the failures are clumped, as the error recording facilities dictate in a software testing environment. The proposed CPNLR and CPMLE yield comparatively more favourable results for certain software failure data structures where the frequency distribution of the cluster (clump) size of the software failures per week displays a negative exponential behaviour. Average relative error (ARE), mean squared error (MSE) and average Kolmogorov-Smirnov (K-S Av.Dn) statistics are used as measures of forecast quality for the proposed and competing parameter-estimation techniques in predicting the number of remaining future failures expected to occur until a target stopping time. Comparisons on five different simulated data sets that contain weekly recorded software failures are made to emphasize the advantages and disadvantages of the competing methods by means of the chronological prediction plots around the true target value and zero per cent relative error line. The proposed generalized compound Poisson (MLE and NLR) methods consistently produce more favourable predictions for those software failure data with negative exponential frequency distribution of the failure clump size versus number of weeks. Otherwise, the popularly used competing M-O log-Poisson model is a better fit for those data with a uniform clump size distribution to recognize the log-Poisson effect while the logarithm of the Poisson equation is a constant, hence uniform. The software analyst is urged to perform exploratory data analysis to recognize the nature of the software failure data before favouring a particular reliability estimation method. © 1997 by John Wiley & Sons, Ltd.
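Of the forecast-quality measures named above, ARE and MSE are straightforward to compute; the sketch below evaluates them for a fabricated sequence of weekly remaining-failure predictions against a known target (the CP model itself is not implemented here).

```python
import numpy as np

# Sketch of two forecast-quality measures named above (ARE and MSE), applied to
# a fabricated sequence of weekly predictions against a known target value.
def average_relative_error(predictions, target):
    predictions = np.asarray(predictions, dtype=float)
    return float(np.mean(np.abs(predictions - target) / target))

def mean_squared_error(predictions, target):
    predictions = np.asarray(predictions, dtype=float)
    return float(np.mean((predictions - target) ** 2))

weekly_predictions = [130, 118, 112, 107, 103, 101]   # remaining-failure forecasts
true_remaining = 100
print(average_relative_error(weekly_predictions, true_remaining))
print(mean_squared_error(weekly_predictions, true_remaining))
```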

12.
1. Introduction. Automatic parallel execution of declarative language programs (e.g. function programs and logic programs) is attractive, as it makes the use of parallel computers very easy, and the programmer need not be concerned with the specifics of the underlying parallel architecture. However, if several processors are executing concurrently, exploiting adaptive parallelism is hard due to non-determinism of task granularity and data dependencies among tasks. The early solution proposed by Conery and Kibler [2] uses an ordering algorithm to determine dependencies at run…

13.
Slicing Software for Model Construction
Applying finite-state verification techniques (e.g., model checking) to software requires that program source code be translated to a finite-state transition system that safely models program behavior. Automatically checking such a transition system for a correctness property is typically very costly, so it is necessary to reduce the size of the transition system as much as possible. In fact, it is often the case that much of a program's source code is irrelevant for verifying a given correctness property. In this paper, we apply program slicing techniques to automatically remove such irrelevant code and thus reduce the size of the corresponding transition system models. We give a simple extension of the classical slicing definition and prove its safety with respect to model checking of linear temporal logic (LTL) formulae. We discuss how this slicing strategy fits into a general methodology for deriving effective software models using abstraction-based program specialization.
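A minimal sketch of the backward-slicing idea (not the paper's extended definition): the slice is the set of statements reachable by following data/control dependences backwards from the slicing criterion, over a tiny invented dependence graph.

```python
from collections import deque

# Minimal sketch: a backward slice is the set of statements reachable from the
# slicing criterion by walking data/control dependences backwards.
def backward_slice(dependences, criterion):
    """dependences: dict mapping a statement to the statements it depends on."""
    sliced, worklist = {criterion}, deque([criterion])
    while worklist:
        stmt = worklist.popleft()
        for dep in dependences.get(stmt, ()):
            if dep not in sliced:
                sliced.add(dep)
                worklist.append(dep)
    return sliced

deps = {
    "s4: assert(y > 0)": ["s2: y = x + 1"],
    "s3: print(z)":      ["s1: z = input()"],
    "s2: y = x + 1":     ["s0: x = input()"],
}
# Statements s1/s3 are irrelevant to the property and are excluded from the slice.
print(sorted(backward_slice(deps, "s4: assert(y > 0)")))
```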

14.
Several popular cost estimation models, such as COCOMO and function points, use adjustment variables, such as software complexity and platform, to modify original estimates and arrive at final estimates. Using data on 666 programs from 15 software projects, this study empirically tests a research model of the influence of three adjustment variables (software complexity, computer platform, and program type, i.e., batch or online programs) on software effort. The results confirm that all three adjustment variables have a significant effect on effort. Further, multiple comparison of means also points to two other results for the data examined: batch programs involve significantly higher software effort than online programs, and programs rated as complex have significantly higher effort than programs rated as average.
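To show how adjustment variables typically enter such models, the sketch below uses the intermediate-COCOMO form effort = a·KLOC^b·EAF, where EAF is the product of effort multipliers; the multiplier ratings are invented for illustration.

```python
# Sketch of how adjustment variables enter a COCOMO-style estimate:
# effort = a * KLOC**b * EAF, where EAF is the product of effort multipliers.
# a and b are the classic intermediate-COCOMO semi-detached coefficients;
# the multiplier ratings below are invented.
def cocomo_effort(kloc, multipliers, a=3.0, b=1.12):
    eaf = 1.0
    for m in multipliers.values():
        eaf *= m
    return a * kloc ** b * eaf   # person-months

ratings = {"complexity": 1.15, "platform_volatility": 1.08, "online_program": 0.95}
print(round(cocomo_effort(32.0, ratings), 1))
```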

15.
The rising costs of software development and maintenance have naturally aroused interest in tools and measures to quantify and analyze software complexity. Many software metrics have been studied widely because of their potential usefulness in predicting the complexity and quality of software. Most of the work reported in this area has been related to non-real-time software. In this paper we report and discuss the results of an experimental investigation of some important metrics and their relationships for a class of 202 Pascal programs used in a real-time distributed processing environment. While some of our observations confirm independent studies, we have noted significant differences. For instance, the correlations between McCabe's control complexity measure and Halstead's metrics are low in comparison to a previous study. Studies of the type reported here are important for understanding the relationships between software metrics.

16.
The construction of large software systems is always achieved through assembly of independently written components — program modules. For these software components to work together, they must share a common set of data types and principles for representing structured data such as arrays of values and files. This common set of tools for creating and operating on data objects is provided by the infrastructure of the computer system: the hardware, operating system and runtime code. Because the nature and properties of these tools are crucial for correct operation of software components and their inter-operation, it is essential to have a precise specification that may be used for verifying correctness of application software on one hand, and to verify correctness of system behavior on the other. We call such a specification a program execution model (PXM). It is evident that the properties of the PXM implemented by a computer system can have serious impact on the ability of application programmers to practice modular software construction. This paper discusses the concept of program execution models and presents a set of principles that a PXM must satisfy to provide a sound basis for modular software construction. Because parallel program execution on computer systems with many processing units is an essential part of contemporary computing environments, the expression of parallelism and modular software construction using components involving parallel operations is included in this treatment. The conclusion is that it is possible to build computer systems that implement a PXM within which any parallel program may be used, unmodified, as a component for building more substantial parallel programs.

17.
Software Design of a Civil UAV Flight Controller
The functional requirements of the flight controller software for a civil camera-equipped UAV are introduced. The logical model of the software was designed from data flow diagrams, and its module structure was designed using structured methods. The idea of intelligent hierarchical control was adopted for program authority allocation and interface design. The multiple operating modes of the program are implemented with a structure of several mutually exclusive main loops plus interrupts. The resulting flight control software has four functional layers and can run in three modes: flight control, testing, and parameter tuning. Several specialized techniques used in the software design are described. The designed software features a high degree of functional integration, good stability, and strong extensibility.

18.
In this empirical study, we evaluate the extent to which a set of software measures is correlated with the number of faults and the total estimated repair effort for a large software system. The measures we use are basic counts reflecting program size and structure, and metrics proposed by McCabe and Halstead. Program size has a major influence on these metrics, and we present a suitable method of adjusting the metrics for size. In modeling faults or repair effort as a function of one variable, a number of measures individually explain approximately one-quarter of the variation observed in the fault data. No single measure does significantly better than size in explaining the variation in faults found across software units, so multiple-variable models are necessary to find metrics of importance in addition to program size. The "best" multivariate model explains approximately one-half of the variation in the fault data. The metrics included in this model (in addition to size) are the ratio of block comments to total lines of code, the number of decisions per function, and the relative vocabulary of program variables and operators. These metrics have potential for future use in the quality control of software.
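A sketch of the multivariate modeling step: faults regressed on size plus two of the metrics named above via ordinary least squares. The unit-level data is fabricated, so the coefficients are only illustrative.

```python
import numpy as np

# Sketch of a multivariate fault model like the one described above: faults
# regressed on size plus size-adjusted metrics via ordinary least squares.
size          = np.array([200, 450, 800, 1200, 300, 950], dtype=float)  # LOC
comment_ratio = np.array([0.10, 0.05, 0.02, 0.03, 0.12, 0.04])          # block comments / LOC
decisions_fn  = np.array([3.0, 5.5, 8.0, 9.5, 2.5, 7.0])                # decisions per function
faults        = np.array([2, 6, 14, 21, 2, 16], dtype=float)

X = np.column_stack([np.ones_like(size), size, comment_ratio, decisions_fn])
coef, *_ = np.linalg.lstsq(X, faults, rcond=None)
print(dict(zip(["intercept", "size", "comment_ratio", "decisions_per_fn"],
               coef.round(4))))
```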

19.
The software operational profile (SOP) is used in software reliability prediction, software quality assessment, performance analysis of software, test case allocation, determination of "when to stop testing," etc. Due to the limited data resources and the large effort required to collect the data and convert it into point estimates, software professionals are reluctant to develop the SOP. A framework is proposed to develop the SOP using fuzzy logic, which requires usage data in linguistic form. The resulting profile is named the fuzzy software operational profile (FSOP). Based on this work, this paper proposes a generalized approach to the allocation of test cases, in which the occurrence probabilities of operations obtained from the FSOP are combined with the criticality of the operations using a fuzzy inference system (FIS). Traditional methods for the allocation of test cases do not consider the application in which the software operates, which is intuitively incorrect. To solve this problem, allocation of test cases with respect to the software application using the FIS model is also proposed in this paper.
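A sketch of the allocation idea, with a simple product of occurrence probability and criticality standing in for the fuzzy inference system's output; the operations and the test budget are invented.

```python
# Sketch: each operation's share of the test budget is proportional to a weight
# combining its FSOP occurrence probability with its criticality. A real FIS
# would produce the weight; a plain product stands in for it here.
def allocate_test_cases(operations, total_cases):
    weights = {op: p["probability"] * p["criticality"] for op, p in operations.items()}
    total_w = sum(weights.values())
    return {op: round(total_cases * w / total_w) for op, w in weights.items()}

operations = {
    "login":        {"probability": 0.30, "criticality": 0.9},
    "transfer":     {"probability": 0.10, "criticality": 1.0},
    "view_balance": {"probability": 0.50, "criticality": 0.4},
    "settings":     {"probability": 0.10, "criticality": 0.3},
}
print(allocate_test_cases(operations, total_cases=200))
```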

20.
丁卫涛  徐开勇 《计算机科学》2016,43(1):202-206, 225
To evaluate software trustworthiness accurately and reasonably, a trustworthiness evaluation model based on software behavior is proposed. First, monitoring points are placed in the software behavior trace, and the attributes of each monitoring point are divided into two levels, control flow and data flow, according to their nature and their role in the trustworthiness evaluation system. Second, for the control-flow-level attributes, an evaluation method for software behavior traces based on the support vector machine (SVM) is proposed; for the data-flow-level attributes, a scenario attribute evaluation method based on the fuzzy analytic hierarchy process is proposed. Finally, experimental analysis shows that the behavior-based trustworthiness evaluation model can evaluate software trustworthiness accurately and with high efficiency.
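A rough sketch of the SVM step for control-flow-level attributes, using scikit-learn's SVC on fabricated monitoring-point feature vectors; the actual features, labels, and kernel in the paper may differ.

```python
from sklearn.svm import SVC

# Rough sketch only: training an SVM to label control-flow-level monitoring-point
# feature vectors as trusted/untrusted. The features and labels are fabricated.
X_train = [
    [0.95, 0.02, 3],   # e.g. call-sequence match ratio, anomaly rate, call depth
    [0.90, 0.05, 4],
    [0.40, 0.30, 9],
    [0.35, 0.40, 8],
]
y_train = [1, 1, 0, 0]     # 1 = trusted behaviour trace, 0 = untrusted

clf = SVC(kernel="rbf", gamma="scale")
clf.fit(X_train, y_train)
print(clf.predict([[0.88, 0.04, 3], [0.30, 0.35, 10]]))
```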

