Similar Documents
20 similar documents found (search time: 15 ms)
1.
The propensity to make programming errors and the rates of error detection and correction are dependent on program complexity. Knowledge of these relationships can be used to avoid error-prone structures in software design and to devise a testing strategy based on the anticipated difficulty of error detection and correction. An experiment in software error data collection and analysis was conducted in order to study these relationships under conditions where the error data could be carefully defined and collected. Several complexity measures that can be defined in terms of the directed-graph representation of a program, such as the cyclomatic number, were analyzed with respect to the following error characteristics: errors found, time between error detections, and error correction time. Significant relationships were found between the complexity measures and the error characteristics. The meaning of directed-graph structural properties in terms of the complexity of the programming and testing tasks was examined.
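The cyclomatic number mentioned in this abstract is easy to compute from the directed-graph view of a program. Below is a minimal illustrative sketch (not from the paper); the control-flow graph is hypothetical:

```python
# Minimal sketch: McCabe's cyclomatic number V(G) = E - N + 2P for a
# control-flow graph with E edges, N nodes, and P connected components.

def cyclomatic_number(edges, nodes, components=1):
    """V(G) = E - N + 2P; for a single program graph, P is usually 1."""
    return len(edges) - len(nodes) + 2 * components

# Hypothetical control-flow graph of a function with one if/else and a loop.
nodes = ["entry", "cond", "then", "else", "loop", "exit"]
edges = [("entry", "cond"), ("cond", "then"), ("cond", "else"),
         ("then", "loop"), ("else", "loop"),
         ("loop", "cond"), ("loop", "exit")]

print(cyclomatic_number(edges, nodes))  # 7 - 6 + 2 = 3
```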

2.
A multivariate statistical procedure called multidimensional scaling is used to study the relationship between various software complexity metrics and program modules. The program modules that make up a software system are analysed, and their effects on the overall characteristics of the software are examined. The multidimensional scaling technique is applied to a sample data set. The scaling procedure clustered similar and dissimilar software complexity metrics: program modules with low complexity and few errors clustered together, while complex modules were isolated. The technique shows promise for identifying complex modules that potentially contain a disproportionate number of errors prior to the testing phase. The ability of the scaling technique to cluster similar and dissimilar characteristics is explained and presented graphically.
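To make the scaling step concrete, here is a small sketch of classical multidimensional scaling implemented with NumPy; the dissimilarity matrix between modules is invented for illustration:

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical multidimensional scaling: embed points whose pairwise
    dissimilarities are given in D into k dimensions."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    B = -0.5 * J @ (D ** 2) @ J          # double-centered Gram matrix
    w, V = np.linalg.eigh(B)             # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]        # top-k eigenpairs
    L = np.sqrt(np.maximum(w[idx], 0))
    return V[:, idx] * L                 # n x k coordinates

# Hypothetical dissimilarities between five program modules, derived from
# (say) normalized differences in their complexity-metric vectors.
D = np.array([[0.0, 0.2, 0.3, 0.9, 1.0],
              [0.2, 0.0, 0.25, 0.85, 0.95],
              [0.3, 0.25, 0.0, 0.8, 0.9],
              [0.9, 0.85, 0.8, 0.0, 0.3],
              [1.0, 0.95, 0.9, 0.3, 0.0]])

coords = classical_mds(D)
print(coords)  # similar modules land near each other; outliers are isolated
```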

3.
Many empirical studies have found that software metrics can predict class error proneness and that the prediction can be used to accurately group error-prone classes. Recent empirical studies have used open source systems. These studies, however, focused on the relationship between software metrics and class error proneness during the development phase of software projects. Whether software metrics can still predict class error proneness during a system's post-release evolution remains an open question. This study examined three releases of the Eclipse project and found that although some metrics can still predict class error proneness in three error-severity categories, the accuracy of the prediction decreased from release to release. Furthermore, we found that the prediction cannot be used to build a metrics model that identifies error-prone classes with acceptable accuracy. These findings suggest that as a system evolves, using commonly used metrics to identify which classes are more prone to errors becomes increasingly difficult, and we should seek alternatives to metric-prediction models for locating error-prone classes if high accuracy is required.
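The kind of metric-based prediction evaluated in such studies is typically a logistic-regression model over class-level metrics. A self-contained sketch on synthetic data (not the Eclipse data used in the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-class metric vectors (e.g. size, coupling, cyclomatic
# complexity) and error labels -- illustrative only, not real project data.
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.2, 0.8, 0.5]) + rng.normal(scale=0.5, size=200) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain gradient descent on the logistic loss.
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = sigmoid(X @ w + b)
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

pred = sigmoid(X @ w + b) > 0.5      # classes flagged as error-prone
print("training accuracy:", np.mean(pred == y))
```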

4.
Evolving software programs requires that software developers reason quantitatively about the modularity impact of several concerns, which are often scattered over the system. In this respect, concern-oriented software analysis is rising to a dominant position in software development, and measurement techniques play a fundamental role in assessing the concern modularity of a software system. Unfortunately, existing measurements are still fundamentally module-oriented rather than concern-oriented. Moreover, the few available concern-oriented metrics are defined in non-systematic, non-shared ways and mainly focus on static properties of a concern, even though many properties can only be accurately quantified at run-time. Hence, novel concern-oriented measurements and, in particular, shared and systematic ways to define them are still welcome. This paper lays the basis for a unified framework for concern-driven measurement. The framework provides a basic terminology and criteria for defining novel concern metrics. To evaluate the framework's feasibility and effectiveness, we show how it can be used to adapt some classic metrics to quantify concerns and, in particular, to instantiate new dynamic concern metrics from their static counterparts.
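As a rough illustration of the static-to-dynamic instantiation such a framework supports, consider a simple concern-diffusion metric counted first over source code and then over an execution trace; all names and data below are hypothetical:

```python
# Sketch of one static concern metric and a dynamic counterpart. All
# component names, concerns, and trace data here are invented.

# Static view: which components' source code touches each concern.
static_map = {
    "logging":  {"OrderService", "PaymentService", "Auditor"},
    "security": {"PaymentService"},
}

# Dynamic view: (component, concern) events recorded during one run.
trace = [("OrderService", "logging"), ("PaymentService", "security"),
         ("OrderService", "logging")]

def static_concern_diffusion(concern):
    """Number of components whose code references the concern."""
    return len(static_map[concern])

def dynamic_concern_diffusion(concern):
    """Number of distinct components that exercised the concern at run-time."""
    return len({comp for comp, c in trace if c == concern})

print(static_concern_diffusion("logging"))   # 3
print(dynamic_concern_diffusion("logging"))  # 1: only OrderService ran it
```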

5.
Spreadsheet programs are probably the most successful example of end-user software development tools and are used for a variety of purposes. Like any type of software, they are prone to error, particularly as they are usually developed by non-programmers. While various techniques exist to support developers in finding errors in procedural programs, tool support for spreadsheet debugging is still limited. In this paper, we show how techniques from model-based diagnosis can be applied and extended for spreadsheet debugging by translating the relevant parts of a spreadsheet into a constraint satisfaction problem. We additionally propose both problem-specific and generalizable extensions to the classical diagnosis algorithms that help detect potential problems in a spreadsheet more efficiently, based on user-provided test cases. The proposed techniques were integrated into a modular framework for spreadsheet debugging and evaluated with respect to scalability on a number of real-world and artificially created spreadsheets. An additional error detection exercise involving 24 subjects was performed to assess the general applicability of such advanced spreadsheet debugging techniques for end users.
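A toy sketch of the underlying idea, assuming a much-simplified spreadsheet model (the paper's actual CSP translation and algorithms are more elaborate): each formula cell may be declared "abnormal", and a diagnosis is a minimal set of formulas whose abnormality reconciles the computation with a user-provided test case.

```python
from itertools import combinations, product

inputs = {"A1": 2, "B1": 3}
order = ["C1", "D1"]                      # topological order of formula cells
formulas = {
    "C1": lambda v: v["A1"] + v["B1"],    # =A1+B1
    "D1": lambda v: v["C1"] * 2,          # =C1*2
}
expected = {"D1": 12}                     # user-provided test case
DOMAIN = range(-20, 21)                   # tiny search domain for the sketch

def consistent(abnormal):
    """True if the abnormal cells can take *some* values (normal cells
    follow their formulas) such that the expected outputs hold."""
    free = [c for c in order if c in abnormal]
    for choice in product(DOMAIN, repeat=len(free)):
        v = dict(inputs, **dict(zip(free, choice)))
        for c in order:
            if c not in abnormal:
                v[c] = formulas[c](v)
        if all(v[c] == x for c, x in expected.items()):
            return True
    return False

# Enumerate minimal diagnoses by increasing cardinality.
cells = list(formulas)
for size in range(len(cells) + 1):
    hits = [set(s) for s in combinations(cells, size) if consistent(set(s))]
    if hits:
        print("minimal diagnoses:", hits)  # [{'C1'}, {'D1'}]
        break
```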

6.

Is it really better to print everything, including software models, or is it better to view them on screen? With the ever increasing complexity of software systems, software modeling is integral to software development. Software models facilitate and automate many activities during development, such as code and test case generation. However, a core goal of software modeling is to communicate and collaborate. Software models are presented to team members on many mediums, the two most common being paper and computer screens. Reading from paper or from a screen is ostensibly considered to have the same effect on model comprehension. However, the literature on text reading indicates that the two reading experiences can be very different, which in turn affects various metrics of reader performance. This paper reports on an experiment conducted to investigate the cognitive effectiveness of reading software models on paper compared with reading them on a computer screen. Cognitive effectiveness here refers to the ease with which a reader can read a model. The experiment used a total of 74 software engineering students as subjects. The results provide strong evidence that displaying diagrams on a screen allows subjects to read them more quickly, and there is also evidence that on-screen viewing induces fewer reading errors.


7.
This paper suggests an approach to constructing an extensible framework for the verification of program systems. In the author's opinion, it will facilitate the application of modern rigorous verification methods to practically significant programs, whose complexity grows continually. The framework can also serve as a test harness for trying out and tuning new formal verification and static analysis techniques on various industrial software packages.

8.
9.
陶传奇, 李必信, Jerry Gao. 《软件学报》 (Journal of Software), 2015, 26(12): 3043-3061
Component-based software construction is now widely used in software development to reduce engineering cost and speed up development. During maintenance, component updates or new releases affect a component-based system and trigger regression testing. For a given modification request, maintainers can apply different modification strategies, and different strategies lead to different regression-testing complexity; this complexity is an important factor in the cost and effectiveness of software maintenance. Existing research has not emphasized the regression-testing complexity of component-based software. Building on a modification-impact complexity model and its metrics, this paper proposes a complexity measurement framework for regression testing. The framework consists of two parts: a graph-based model and a formal metric computation. The measures capture the regression-testing complexity factors of component-based software at both the component and the system level and visualize how the complexity changes; concrete computation methods are then derived from the model. Finally, in an experimental study, several independent groups implemented the same modification request on the same component-based software, and the regression-testing complexity of their implementations was compared. The results show that the proposed measures are feasible and effective.
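As a rough illustration of the graph-based side of such a framework (the paper's own model and formulas are more detailed), here is a toy measure that counts the components and dependency edges a change forces back into regression testing; the dependency graph is invented:

```python
# Toy graph-based regression-test complexity measure for a component
# system. All component names and dependencies are hypothetical.

deps = {  # component -> components it depends on
    "UI": ["OrderLogic", "Auth"],
    "OrderLogic": ["Storage"],
    "Auth": ["Storage"],
    "Storage": [],
}

def affected(changed):
    """Components that (transitively) depend on a changed component."""
    hit = set(changed)
    grew = True
    while grew:
        grew = False
        for comp, uses in deps.items():
            if comp not in hit and any(u in hit for u in uses):
                hit.add(comp)
                grew = True
    return hit

def retest_complexity(changed):
    """Toy measure: re-tested components plus re-tested dependency edges."""
    a = affected(changed)
    edges = sum(1 for c in a for u in deps[c] if u in a)
    return len(a) + edges

print(affected({"Storage"}))           # everything depends on Storage
print(retest_complexity({"Storage"}))  # 4 components + 4 edges = 8
```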

10.
11.
12.
We apply state-of-the-art deductive verification tools to check security-relevant properties of cryptographic software, including safety, absence of error propagation, and correctness with respect to reference implementations. We also develop techniques to help us in this task, focusing on methods oriented towards increased levels of automation in scenarios where there are clear limits to such automation. These techniques allow us to integrate automatic proof tools with an interactive proof assistant, where the latter is used off-line to prove once and for all fundamental lemmas about properties of programs. The techniques developed are of independent interest for practical deductive verification in general.

13.
There is a dichotomy of opinion on the use of software testing versus formal verification in software development. Testing has been the accepted method for detecting and removing errors and has played a significant error-removal role. Formal verification has only recently matured into accepted practice but shows the potential for playing an even more significant error-prevention role. The Cleanroom software development process, developed by the IBM Federal Systems Division, combines both ideas into an effective development tool. Software engineering methods based on functional verification support the production of software with sufficient quality to forgo traditional unit or structural testing. Statistical methods are introduced that define objective and formal strategies for product or functional testing. The synergy between the two ideas results in software with fewer errors, which are both easier to find and easier to fix, and in products with exceptional operating characteristics. Error prevention, not removal, is the key and the only viable approach to sustained software quality growth. This paper covers the Cleanroom development method and its impact on the error prevention and removal processes, and discusses the results from its use in software development.
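Cleanroom's statistical testing draws test cases from a usage model rather than from the program's structure. A minimal sketch of this idea, with an invented usage model encoded as a Markov chain:

```python
import random

# Sketch of statistical usage testing: test cases are sampled from a
# usage model (a Markov chain over user actions), so test effort mirrors
# expected operational use. The model below is hypothetical.

usage_model = {
    "start":  [("login", 1.0)],
    "login":  [("browse", 0.7), ("logout", 0.3)],
    "browse": [("buy", 0.4), ("browse", 0.4), ("logout", 0.2)],
    "buy":    [("logout", 1.0)],
    "logout": [],  # terminal state
}

def sample_test_case(rng):
    state, path = "start", []
    while usage_model[state]:
        states, probs = zip(*usage_model[state])
        state = rng.choices(states, weights=probs)[0]
        path.append(state)
    return path

rng = random.Random(42)
for _ in range(3):
    print(sample_test_case(rng))  # e.g. ['login', 'browse', 'buy', 'logout']
```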

14.
A number of architectures can be used to integrate the different components of a manufacturing enterprise, such as machine tools, robots, and guided vehicles. The choice of architecture has a significant impact on system complexity, which in turn determines properties such as scalability, flexibility, fault tolerance, and modifiability. There is a need for metrics that quantify the complexity of a system and that can serve as a means of comparing alternative architectures at the design stage. In this paper, we propose using metrics from software engineering to characterize the complexity of manufacturing systems. These metrics have been applied to measure the complexity of two software systems: a material delivery system and a distributed scheduling system.

15.
This paper defines two suites of metrics that address static and dynamic aspects of component assembly. The static metrics measure the complexity and criticality of component assembly; complexity is measured using the Component Packing Density and Component Interaction Density metrics, and four criticality conditions, namely Link, Bridge, Inheritance, and Size criticality, are identified and quantified. The complexity and criticality metrics are combined into a Triangular Metric, which can be used to classify the type and nature of applications. The dynamic metrics are collected during the runtime of a complete application; they are useful for identifying super-components and for evaluating the degree of utilization of the various components. Both the static and dynamic metrics are evaluated against Weyuker's set of properties, and the results show that the metrics provide a valid means of measuring issues in component assembly. We relate our metrics suite to McCall's Quality Model and illustrate its impact on product quality and on the management of component-based product development.
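For concreteness, the two static complexity metrics are commonly given as simple ratios; the formulations below, and the example assembly, should be read as assumptions rather than the paper's exact definitions:

```python
# Assumed textbook formulations (not necessarily the paper's exact ones):
#   CPD = number of constituents (e.g. classes) / number of components
#   CID = actual inter-component interactions / maximum possible interactions

def packing_density(constituents, components):
    return constituents / components

def interaction_density(actual_interactions, components):
    max_interactions = components * (components - 1)  # directed pairs
    return actual_interactions / max_interactions

# Hypothetical assembly: 5 components containing 40 classes, with 6
# directed interactions out of 5 * 4 = 20 possible.
print(packing_density(40, 5))     # 8.0 constituents per component
print(interaction_density(6, 5))  # 0.3
```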

16.
Error propagation is a fundamental problem in analyzing the uncertainty of dependable systems; it can be used to find the parts of a system most vulnerable to errors and to expose the mutual influence among those parts. This paper studies how errors propagate through software at both the signal level and the module level, and defines parameters describing the propagation process together with methods for computing them, introducing for the first time the concepts of module leakage rate and module activity rate along with their computation. The resulting error-propagation analysis framework is then applied to the fiber-optic-gyroscope strapdown attitude control system of a satellite. Fault-injection experiments were used to determine the analysis parameters and to validate the feasibility and correctness of the proposed framework.
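One plausible reading of the fault-injection step, as an illustrative sketch only (the paper's parameter definitions are more detailed): estimate a module's leakage rate as the fraction of injected input corruptions that reach its output.

```python
import random

# Illustrative fault-injection estimate of a module "leakage rate" --
# read here, as a simplifying assumption, as the fraction of injected
# input corruptions that change the module's output.

def module(x):
    """Hypothetical module: output saturation masks some input errors."""
    return max(-1.0, min(1.0, x * 0.5))

def estimate_leakage(trials=10_000, seed=1):
    rng = random.Random(seed)
    leaked = 0
    for _ in range(trials):
        x = rng.uniform(-4, 4)
        fault = rng.uniform(-2, 2)          # injected input corruption
        if module(x + fault) != module(x):  # did the error reach the output?
            leaked += 1
    return leaked / trials

print(f"estimated leakage rate: {estimate_leakage():.2f}")
```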

17.
To address the problem of verifying and reasoning about continuous program executions in software testing and static program verification, this paper proposes RPA, a runtime program verification framework based on program instrumentation and Boolean logic. It defines RPAL, a dynamic logic language for describing runtime program properties and specifications, implements automated instrumentation to collect runtime program-state information, and designs a sentence-scheduling algorithm that supports efficient verification. Experimental results show that, combined with suitable predicate extensions, RPA can effectively verify and analyze software logic and uncover latent software errors.
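A toy sketch of instrumentation-based runtime checking in the spirit of such a framework; the wrapper, the property, and the function below are all invented for illustration and are not RPA's actual API:

```python
import functools

trace = []  # run-time state collected by the instrumentation

def instrument(fn):
    """Instrumentation wrapper: log each call and its result."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        trace.append((fn.__name__, args, result))
        return result
    return wrapper

@instrument
def withdraw(balance, amount):
    return balance - amount

# Boolean property over the trace: no withdrawal drives the balance negative.
def check_no_overdraft(trace):
    return all(result >= 0 for name, _, result in trace if name == "withdraw")

withdraw(100, 30)
withdraw(70, 90)
print(check_no_overdraft(trace))  # False: the second call violates the property
```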

18.
The problem of verifying software systems that use dynamic data structures (such as linked lists, queues, or binary trees) has attracted increasing interest over the last decade. Dynamic structures are not easily supported by verification techniques because, among other reasons, it is difficult to manage the pointer-based internal representation efficiently. This is a key aspect when, for instance, the goal is to construct a verification tool based on model checking techniques. In addition, since new nodes can be dynamically inserted into or removed from the structure, the shape of the dynamic data (and other more specific properties) may vary at runtime, and errors such as undesirable sharing between two nodes are difficult to detect. In this paper, we propose using mu-calculus to describe dynamic data structures such as lists and trees and to analyze them with model checking techniques. The expressiveness of mu-calculus makes it possible to describe these structures naturally. In addition, following the ideas of separation logic, the logic has been extended with a new operator capable of describing the non-sharing property, which is essential when analyzing dynamic data structures.
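The non-sharing property the new operator expresses has a direct operational reading: two structures share no heap cell. A small sketch that checks this on a toy linked-list heap (node identity stands in for heap addresses):

```python
# Operational check of non-sharing between two linked lists: the sets of
# heap cells reachable from their heads must be disjoint.

class Node:
    def __init__(self, value, nxt=None):
        self.value, self.next = value, nxt

def cells(head):
    """Set of heap cells reachable from a list head (cycle-safe)."""
    seen = set()
    while head is not None and id(head) not in seen:
        seen.add(id(head))
        head = head.next
    return seen

def non_sharing(a, b):
    return cells(a).isdisjoint(cells(b))

tail = Node(3)
xs = Node(1, Node(2, tail))
ys = Node(9, tail)          # ys aliases xs's tail node
print(non_sharing(xs, ys))  # False: both lists reach the same cell
```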

19.
A major portion of the effort expended in developing commercial software today is associated with program testing. Schedule and/or resource constraints frequently require that testing be conducted so as to uncover the greatest number of errors possible in the time allowed. In this paper we describe a study undertaken to assess the potential usefulness of various product- and process-related measures in identifying error-prone software. Our goal was to establish an empirical basis for the efficient use of limited testing resources using objective, measurable criteria. Through a detailed analysis of three software products and their error discovery histories, we have found that simple metrics related to the amount of data and the structural complexity of programs are of value for this purpose.

20.
An experiment comparing the effectiveness of the all-uses and all-edges test data adequacy criteria is discussed. The experiment was designed to overcome some of the deficiencies of previous software testing experiments. A large number of test sets was randomly generated for each of nine subject programs with subtle errors. For each test set, the percentages of executable edges and definition-use associations covered were measured, and it was determined whether the test set exposed an error. Hypothesis testing was used to investigate whether all-uses adequate test sets are more likely to expose errors than all-edges adequate test sets. Logistic regression analysis was used to investigate whether the probability that a test set exposes an error increases with the percentage of definition-use associations or edges it covers. Error-exposing ability was shown to be strongly positively correlated with the percentage of covered definition-use associations in only four of the nine subjects. Error-exposing ability was also shown to be positively correlated with the percentage of covered edges in four different subjects, but the relationship was weaker.
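The measurement step of such an experiment is straightforward to sketch: for a given test set, compute the percentage of executable edges and of definition-use associations it covers. The coverage data below is hypothetical:

```python
# Per-test coverage data (hypothetical): which edges and definition-use
# (du) associations each test case exercises.

all_edges = {"e1", "e2", "e3", "e4", "e5"}
all_du = {"x:1->3", "x:1->4", "y:2->5"}  # def line -> use line, per variable

covered_by_test = {
    "t1": {"edges": {"e1", "e2"}, "du": {"x:1->3"}},
    "t2": {"edges": {"e2", "e3", "e5"}, "du": {"x:1->4", "y:2->5"}},
}

def coverage(test_set, kind, universe):
    """Percentage of the coverage universe hit by a set of test cases."""
    hit = set().union(*(covered_by_test[t][kind] for t in test_set))
    return 100.0 * len(hit & universe) / len(universe)

print(coverage({"t1", "t2"}, "edges", all_edges))  # 80.0
print(coverage({"t1", "t2"}, "du", all_du))        # 100.0
```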

