Similar Documents (20 results)
1.
This paper discusses the need for measures of complexity and unstructuredness of programs. A simple language-independent concept is put forward as a measure of control-flow complexity in program text and is then developed for use as a measure of unstructuredness. The proposed metric is compared with other metrics, the most notable of which is the cyclomatic complexity measure. Some experience with automatic tools for obtaining these metrics is reported.
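The cyclomatic measure this abstract compares against has a compact graph formulation, V(G) = E - N + 2P. A minimal sketch of that baseline (illustrative only; the paper's own language-independent metric is not specified here):

```python
# Cyclomatic complexity V(G) = E - N + 2P for a control-flow graph.
# Illustrative baseline, not the paper's proposed measure.

def cyclomatic_complexity(edges, num_nodes, num_components=1):
    """McCabe's V(G) = E - N + 2P."""
    return len(edges) - num_nodes + 2 * num_components

# A while loop containing an if/else: 7 nodes, 8 edges -> V(G) = 3.
edges = [(0, 1), (1, 2), (2, 3), (2, 4), (3, 5), (4, 5), (5, 1), (1, 6)]
print(cyclomatic_complexity(edges, num_nodes=7))  # 3
```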

2.
3.
Application of neural networks for predicting program faults
Accurately predicting the number of faults in program modules is a major problem in the quality control of large software development efforts. Some software complexity metrics are closely related to the distribution of faults across program modules. Using these relationships, software engineers develop models that provide early estimates of quality metrics that do not become available until late in the development cycle. By considering these early estimates, software engineers can take actions to avoid or prepare for emerging quality problems. Most often, the predictive models are based upon multiple regression analysis. However, measures of software quality and complexity exhibit systematic departures from the assumptions of these analyses. With extreme violations of these assumptions, multiple regression models become unstable and lose most of their predictive quality. Since neural network models carry no data assumptions, these models could be more appropriate than regression models for modeling software faults. In this paper, we explore a neural network methodology for developing models that predict the number of faults in program modules. We apply this methodology to develop neural network models based upon data collected during the development of two commercial software systems. After developing neural network models, we apply multiple linear regression methods to develop regression models on the same data. For the data sets considered, the neural network methodology produced better predictive models in terms of both quality of fit and predictive quality.
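A minimal sketch of the comparison described, on synthetic data with scikit-learn stand-ins (the paper used measurements from two commercial systems, and its network architecture is not given here):

```python
# Neural network vs. multiple linear regression for predicting fault
# counts from complexity metrics; synthetic, nonlinear fault data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(300, 4))                 # per-module metrics
y = np.exp(2 * X[:, 0]) + X[:, 1] ** 2 + rng.normal(0, 0.1, 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (LinearRegression(),
              MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                           random_state=0)):
    model.fit(X_tr, y_tr)
    err = mean_absolute_error(y_te, model.predict(X_te))
    print(type(model).__name__, round(err, 3))
```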

4.
This paper describes a tool for debugging programs which develop faults after they have been modified or are ported to other computer systems. The tool enhances the traditional debugging approach by automating the comparison of data structures between two running programs. Using this technique, it is possible to use early versions of a program which are known to operate correctly to generate values for comparison with the new program under development. The tool allows the reference code and the program being developed to execute on different computer systems by using open distributed systems techniques. A data visualisation facility allows the user to view the differences in data structures. By using the data flow of the code, it is possible to locate faulty sections of code rapidly. An evaluation is performed by using three case studies to illustrate the power of the technique.
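A toy, in-process sketch of the core comparison step (the real tool diffs structures across two running programs on different machines; all names here are illustrative):

```python
# Recursively compare a data structure from a trusted reference run
# against the same structure from the program under development,
# reporting every path at which they diverge.
def diff(ref, new, path="root"):
    if type(ref) is not type(new):
        yield (path, ref, new)
    elif isinstance(ref, dict):
        for k in sorted(set(ref) | set(new)):
            yield from diff(ref.get(k), new.get(k), f"{path}.{k}")
    elif isinstance(ref, (list, tuple)):
        for i, (r, n) in enumerate(zip(ref, new)):
            yield from diff(r, n, f"{path}[{i}]")
        if len(ref) != len(new):
            yield (f"{path}.len", len(ref), len(new))
    elif ref != new:
        yield (path, ref, new)

print(list(diff({"a": [1, 2]}, {"a": [1, 3]})))  # [('root.a[1]', 2, 3)]
```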

5.
A finite-strip geometric nonlinear analysis is presented for elastic problems involving folded-plate structures. Compared with the standard finite-element method, its main advantages are in data preparation, program complexity, and execution time. The finite-strip method, which satisfies the von Karman plate equations in the nonlinear elastic range, leads to the coupling of all harmonics. However, coupling of series terms dramatically increases computation time in existing finite-strip sequential programs when a large number of series terms is used. The research reported in this paper combines various parallelization techniques and architectures (computing clusters and graphic processing units) with suitable programming models (MPI and CUDA) to speed up lengthy computations. In addition, a metric expressing the computational weight of input sets is presented. This metric allows computational complexity comparison of different inputs.

6.

Spectrum-based fault localization (SFL) techniques have shown considerable effectiveness in localizing software faults. They leverage a ranking metric to automatically assign suspiciousness scores to entities in a given faulty program. However, for some programs, the current SFL ranking metrics lose effectiveness. In this paper, we introduce ConsilientSFL, which synthesizes a new ranking metric for a given program from a customized combination of a set of existing ranking metrics. ConsilientSFL is significant in that it demonstrates the use of voting systems in a software engineering task. First, several mutated, faulty versions are generated for a program. Then, the mutated versions are executed with the test data. Next, the effectiveness of each existing ranking metric is computed for each mutated version. After that, for each mutated version, the computed metrics are ranked using a preferential voting system. Several top metrics are then chosen based on their ranks across all mutated versions. Finally, the chosen ranking metrics are normalized and synthesized, yielding a new ranking metric. To evaluate ConsilientSFL, we conducted experiments on 27 subject programs from the Code4Bench and Siemens benchmarks. In the experiments, we found that ConsilientSFL outperformed every single ranking metric. In particular, averaged over all programs, recall, precision, f-measure, and percentage of code inspection were nearly 7, 9, 12, and 5 percentage points higher than with single metrics, respectively. The impact of this work is twofold. First, it mitigates the problem of choosing a proper ranking metric for the faulty program at hand. Second, it helps debuggers find more faults with less time and effort, yielding higher-quality software.

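A sketch of the metric-combination step, assuming a Borda-style preferential vote (the abstract does not fix which voting rule is used): each mutated version contributes a ranking of the candidate SFL metrics, and aggregate scores pick the winners.

```python
# Borda-count aggregation of per-mutant rankings of SFL metrics.
from collections import defaultdict

def borda(rankings):
    """rankings: lists of metric names, best first, one per mutant."""
    scores = defaultdict(int)
    for ranking in rankings:
        for pos, metric in enumerate(ranking):
            scores[metric] += len(ranking) - pos
    return sorted(scores, key=scores.get, reverse=True)

votes = [["ochiai", "tarantula", "jaccard"],
         ["ochiai", "jaccard", "tarantula"],
         ["tarantula", "ochiai", "jaccard"]]
print(borda(votes))  # ['ochiai', 'tarantula', 'jaccard']
```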

7.
Parallel programs are intrinsically non-deterministic, and therefore the techniques of cyclical debugging that are commonly used for sequential programs are not suitable for parallel ones. This paper proposes a method to reproduce Occam program behaviour. Saving information on the timer values input by the program and the guards selected at run-time on alternative commands allows program replay, i.e. it makes it possible to re-execute the program deterministically with the same inputs following the same instruction path. This enables the software developer to use tools such as debuggers and intrusive monitors to help identify program faults. After discussing possible implementations of the proposed technique, IRD (an interactive replay debugger for Occam programs) is described. Finally, the use of the IRD in a sample debug session is presented as an example.
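The recording idea carries over to any source of nondeterminism; a minimal Python sketch (the IRD tool itself targets Occam timers and ALT guards, and these names are illustrative):

```python
# Log every nondeterministic value on the first run; feed the log back
# to make a second run follow the same instruction path.
import random
import time

class Recorder:
    def __init__(self, log=None):
        self.replaying = log is not None
        self.log = log if log is not None else []
        self.pos = 0

    def choice(self, tag, produce):
        if self.replaying:
            rec_tag, value = self.log[self.pos]
            self.pos += 1
            assert rec_tag == tag, "replay diverged from recording"
            return value
        value = produce()
        self.log.append((tag, value))
        return value

def program(rec):
    t = rec.choice("timer", time.time_ns)               # timer input
    g = rec.choice("alt", lambda: random.randrange(3))  # selected guard
    return (t % 97) + g

rec = Recorder()
first = program(rec)
assert program(Recorder(rec.log)) == first  # deterministic replay
```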

8.
We consider the time-dependent demands for data movement that a parallel program makes on the architecture that executes it. The result is an architecture-independent metric that represents the temporal behavior of data-movement requirements. Programs are described as series of computations and data movements, and while message passing is not ruled out, we focus on explicit parallel programs using a fixed number of processes in a distributed shared-memory environment. Operations are assumed to be explicitly allocated to processors when the metric is applied, which might correspond to intermediate code in a parallelizing compiler. The metric is called the interprocess read (IR) temporal metric. A key to developing an architecture-independent temporal metric is modeling program execution time in an architecture-independent way. This is possible because well-synchronized parallel programs make coordinated progress above a certain level of granularity. Our execution time characterization takes into account barrier synchronization and critical sections. We illustrate the metric using instruction count on simple code fragments and then from multiprocessor program traces (Splash benchmarks). Results of running the benchmarks on simulated network architectures show that the IR metric for the time scale of network response predicts performance better than whole program measures.
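A much-simplified sketch of counting interprocess reads over time windows (the paper's IR metric is defined over an architecture-independent execution model; this toy version just classifies a read as interprocess when the location was last written by a different process):

```python
# Temporal profile of interprocess reads from a (time, pid, op, addr) trace.
from collections import defaultdict

def ir_profile(trace, window):
    last_writer = {}
    buckets = defaultdict(int)
    for t, pid, op, addr in sorted(trace):
        if op == "w":
            last_writer[addr] = pid
        elif last_writer.get(addr, pid) != pid:
            buckets[t // window] += 1      # read of another process's data
    return dict(buckets)

trace = [(0, 0, "w", "x"), (1, 1, "r", "x"), (2, 1, "w", "y"),
         (3, 0, "r", "y"), (9, 1, "r", "x")]
print(ir_profile(trace, window=4))  # {0: 2, 2: 1}
```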

9.
Near-infrared spectroscopy is applied to classify apple samples of different varieties, and a new qualitative analysis method for apple near-infrared spectra based on the uncorrelated discriminant transform is proposed. In the experiments, three feature-extraction methods (principal component analysis, Fisher discriminant analysis, and the uncorrelated discriminant transform) were applied to the apple spectral data, three apple classification models were built with the K-nearest-neighbor algorithm, and the models were then checked with leave-one-out cross-validation. The results show that the model built with the uncorrelated discriminant transform achieves a higher correct recognition rate than the models built with principal component analysis or Fisher discriminant analysis.
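A sketch of the experimental pipeline with the two baseline extractors (PCA and Fisher/LDA), KNN classification, and leave-one-out validation; the uncorrelated discriminant transform itself is not available in scikit-learn, so only the baselines are shown, on a stand-in dataset:

```python
# PCA / LDA feature extraction + 3-NN, validated with leave-one-out.
from sklearn.datasets import load_wine  # stand-in for NIR spectra
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)
for extractor in (PCA(n_components=2),
                  LinearDiscriminantAnalysis(n_components=2)):
    pipe = make_pipeline(StandardScaler(), extractor, KNeighborsClassifier(3))
    acc = cross_val_score(pipe, X, y, cv=LeaveOneOut()).mean()
    print(type(extractor).__name__, round(acc, 3))
```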

10.
In this paper, we investigate how to incorporate program complexity measures with a software quality model. We collect software complexity metrics and fault counts from each build during the testing phase of a large commercial software system. Though the data are limited in quantity, we are able to predict the number of faults in the next build. The technique we used is called time series analysis and forecasting. The methodology assumes that future predictions are based on the history of past observations. We will show that the combined complexity-quality model is an improvement over the simpler quality-only model. Finally, we explore how the testing process used in this development may be improved by using these predictions and suggest areas for future research.
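A minimal sketch of the forecasting step on synthetic per-build data (the paper's exact model form is not given here): regress each build's fault count on the previous build's count plus a complexity covariate, then predict the next build.

```python
# One-step-ahead fault forecast: y_t = a*y_{t-1} + b*c_t + intercept.
import numpy as np

faults = np.array([30, 28, 25, 27, 22, 20, 18, 17, 15, 14], float)
complexity = np.array([9, 9, 8, 9, 7, 7, 6, 6, 5, 5], float)

X = np.column_stack([faults[:-1], complexity[1:], np.ones(len(faults) - 1)])
coef, *_ = np.linalg.lstsq(X, faults[1:], rcond=None)
next_build = coef @ [faults[-1], complexity[-1], 1.0]
print(round(float(next_build), 1))  # forecast for the next build
```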

11.
Uncorrelated discriminant analysis is a highly effective and important linear discriminant method that extracts feature components with the property of being mutually uncorrelated. However, because obtaining each discriminant vector requires solving an eigen-equation, uncorrelated discriminant analysis has always been computationally expensive, especially when many discriminant vectors are needed. Based on an equivalent Fisher criterion function, this paper proposes an alternative formulation of the uncorrelated discriminant analysis problem. Using the method of Lagrange multipliers, a concise expression for the "uncorrelated" discriminant vectors of this formulation is derived. Experiments on the CENPARMI handwritten Arabic-digit database and the ORL face-image database show that the improved uncorrelated discriminant analysis algorithm proposed here is considerably more computationally efficient than the original algorithm.
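For reference, the standard quantities this abstract assumes (its own equivalent criterion and closed-form solution are not reproduced here): the Fisher criterion maximized by each discriminant vector, and the uncorrelatedness constraint tying successive vectors together through the total scatter matrix:

```latex
J(w) = \frac{w^{\top} S_b \, w}{w^{\top} S_w \, w},
\qquad
w_i^{\top} S_t \, w_j = 0 \quad (i \neq j)
```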

12.
The rising demand and cost of software have prompted researchers to investigate factors associated with program changes in software systems. Numerous program complexity measures and statistical techniques have been used to study the effects of program complexity on program changes. The effects of programming methodology on the relationship of complexity to program changes were measured in this study. The results suggest that the relationship of length and structure complexity characteristics to program changes is consistent for different programming methodologies, while the relationship of program changes and characteristics that relate to the use of data and procedure names is not consistent for different programming methodologies.

13.
Linear and kernel discriminant analysis are popular approaches for supervised dimensionality reduction. Uncorrelated and regularized discriminant analysis have been proposed to overcome the singularity problem encountered by classical discriminant analysis. In this paper, we study the properties of kernel uncorrelated and regularized discriminant analysis, called KUDA and KRDA, respectively. In particular, we show that under a mild condition, both linear and kernel uncorrelated discriminant analysis project samples in the same class to a common vector in the dimensionality-reduced space. This implies that uncorrelated discriminant analysis may suffer from the overfitting problem if there are a large number of samples in each class. We show that as the regularization parameter in KRDA tends to zero, KRDA approaches KUDA. This shows that KUDA is a special case of KRDA, and that regularization can be applied to overcome the overfitting problem in uncorrelated discriminant analysis. As the performance of KRDA depends on the value of the regularization parameter, we show that the matrix computations involved in KRDA can be simplified, so that a large number of candidate values can be cross-validated efficiently. Finally, we conduct experiments to evaluate the proposed theories and algorithms.
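The practical upshot is that the regularization parameter must be tuned; a sketch of that cross-validation loop, using shrinkage LDA as a linear stand-in for KRDA (whose kernel form is not in scikit-learn):

```python
# Cross-validating candidate regularization values for discriminant analysis.
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
for lam in (1e-4, 1e-2, 1e-1, 0.5):
    clf = LinearDiscriminantAnalysis(solver="eigen", shrinkage=lam)
    print(lam, cross_val_score(clf, X, y, cv=5).mean().round(3))
```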

14.
(Semi-)automated diagnosis of software faults can drastically increase debugging efficiency, improving reliability and time-to-market. Current automatic diagnosis techniques are predominantly of a statistical nature and, despite typical defect densities, do not explicitly consider multiple faults, as also demonstrated by the popularity of the single-fault benchmark set of programs. We present a reasoning approach, called Zoltar-M(ultiple fault), that yields multiple-fault diagnoses, ranked in order of their probability. Although application of Zoltar-M to programs with many faults requires heuristics (trading-off completeness) to reduce the inherent computational complexity, theory as well as experiments on synthetic program models and multiple-fault program versions available from the software infrastructure repository (SIR) show that for multiple-fault programs this approach can outperform statistical techniques, notably spectrum-based fault localization (SFL). As a side-effect of this research, we present a new SFL variant, called Zoltar-S(ingle fault), that is optimal for single-fault programs, outperforming all other variants known to date.
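For contrast with the reasoning approach, a sketch of the statistical SFL baseline it is compared against, using the well-known Ochiai formula (Zoltar-S itself uses a different formula not reproduced here):

```python
# Rank statements by Ochiai suspiciousness from pass/fail coverage spectra.
import math

def ochiai(cov, failed):
    """cov[i][s] = 1 if test i executes statement s; failed[i] = test i failed."""
    total_fail = sum(failed)
    scores = []
    for s in range(len(cov[0])):
        ef = sum(c[s] and f for c, f in zip(cov, failed))       # failing runs covering s
        ep = sum(c[s] and not f for c, f in zip(cov, failed))   # passing runs covering s
        denom = math.sqrt(total_fail * (ef + ep))
        scores.append(ef / denom if denom else 0.0)
    return scores

cov = [[1, 1, 0], [1, 0, 1], [0, 1, 1]]
failed = [True, False, False]
print([round(x, 3) for x in ochiai(cov, failed)])  # [0.707, 0.707, 0.0]
```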

15.
To address the temporal correlation of process data and the question of whether a process fault affects product quality, a fault detection method based on the Bagging idea and canonical variate analysis (CVA), called Bagging-CVA, is proposed. The Bagging idea is used to draw random samples from the modeling data to form multiple new data sets, eliminating the temporal correlation in the data. Process-related and quality-related fault detection models are built with CVA on each new data set, while monitoring...
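A sketch of just the Bagging step this (truncated) abstract describes, drawing bootstrap subsamples so each modeling set loses the original temporal ordering (the per-subset CVA detection models are not shown):

```python
# Bootstrap resampling of time-ordered process data into 10 modeling sets.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))          # process measurements over time
bags = [X[rng.integers(0, len(X), size=len(X))] for _ in range(10)]
print(len(bags), bags[0].shape)        # 10 (500, 6)
```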

16.
This paper presents the results of a study of the software complexity characteristics of a large real-time signal processing system for which there is a 6-yr maintenance history. The objective of the study was to compare values generated by software metrics to the maintenance history in order to determine which software complexity metrics would be most useful for estimating maintenance effort. The metrics that were analyzed were program size measures, software science measures, and control flow measures. During the course of the study two new software metrics were defined. The new metrics, maximum knot depth and knots per jump ratio, are both extensions of the knot count metric. When comparing the metrics to the maintenance data the control flow measures showed the strongest positive correlation.
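The knot count underlying the two new metrics has a direct pairwise definition; a sketch (the encoding of jumps as line pairs is illustrative):

```python
# Two jumps "knot" when their line ranges interleave.
from itertools import combinations

def knots(jumps):
    """jumps: (from_line, to_line) pairs; returns the crossing count."""
    spans = [tuple(sorted(j)) for j in jumps]
    return sum((a < c < b < d) or (c < a < d < b)
               for (a, b), (c, d) in combinations(spans, 2))

print(knots([(1, 5), (3, 8), (9, 12)]))  # only (1,5) and (3,8) interleave -> 1
```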

17.
Software complexity measures are quantitative estimates of the amount of effort required by a programmer to comprehend a piece of code. Many measures have been designed for standard procedural languages, but little work has been done to apply software complexity concepts to nontraditional programming paradigms. This paper presents a collection of software complexity measures that were specifically designed to quantify the conceptual complexity of rule-based programs. These measures are divided into two classes: bulk measures, which estimate complexity by examining aspects of program size, and rule measures, which gauge complexity based on the ways in which program rules interact with data and other rules. A pilot study was conducted to assess the effectiveness of these measures. Several measures were found to correlate well with the study participants' ratings of program difficulty and the time required by them to answer questions that required comprehension of program elements. The physical order of program rules was also shown to affect comprehension. The authors conclude that the development of software complexity measures for particular programming paradigms may lead to better tools for managing program development and predicting maintenance effort in nontraditional programming environments.
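As an illustration of the bulk-measure idea (the paper's concrete measure definitions are not reproduced here), size-based estimates can be read directly off a rule base:

```python
# Simple bulk measures for a toy rule base: rule count and mean
# conditions per rule.
rules = [
    {"if": ["temp > 90", "pressure > 5"], "then": "open_valve"},
    {"if": ["temp <= 90"], "then": "close_valve"},
]
bulk = {"num_rules": len(rules),
        "mean_conditions": sum(len(r["if"]) for r in rules) / len(rules)}
print(bulk)  # {'num_rules': 2, 'mean_conditions': 1.5}
```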

18.
Concurrent processes can be used both for programming computation and for programming storage. Previous implementations of Flat GHC, however, have been tuned for computation-intensive programs, and perform poorly for storage-intensive programs (such as programs implementing reconfigurable data structures using processes and streams) and demand-driven programs. This paper proposes an optimization technique for programs in which processes are almost always suspended. The technique compiles unification for data transfer into message passing. Instead of reducing the number of process switching operations, the technique optimizes the cost of each process switching operation and reduces the number of cons operations for data buffering. The technique is based on a mode system which is powerful enough to analyze bidirectional communication and streams of streams. The mode system is based on mode constraints imposed by individual clauses rather than on global dataflow analysis, enabling separate analysis of program modules. The introduction of a mode system into Flat GHC effectively subsets Flat GHC; the resulting language is called Moded Flat GHC. Moded Flat GHC programs enjoy two important properties under certain conditions: (1) reduction of a goal clause retains the well-modedness of the clause, and (2) when execution terminates, all the variables in an initial goal clause will be bound to ground terms. Practically, the computational complexity of all-at-once mode analysis can be made almost linear with respect to the size n of the program and the complexity of the data structures used, and the complexity of separate analysis is higher only by O(log n) times. Mode analysis provides useful information for debugging as well. Benchmark results show that the proposed technique well improves the performance of storage-intensive programs and demand-driven programs compared with a conventional native-code implementation. It also improves the performance of some computation-intensive programs. We expect that the proposed technique will expand the application areas of concurrent logic languages.

19.
A study of the relationship between the cyclomatic complexity metric (T. McCabe, 1976) and software maintenance productivity, given that a metric that measures complexity should prove to be a useful predictor of maintenance costs, is reported. The cyclomatic complexity metric is a measure of the maximum number of linearly independent circuits in a program control graph. The current research validates previously raised concerns about the metric on a new data set. However, a simple transformation of the metric is investigated whereby the cyclomatic complexity is divided by the size of the system in source statements, thereby determining a complexity density ratio. This complexity density ratio is demonstrated to be a useful predictor of software maintenance productivity on a small pilot sample of maintenance projects.
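The proposed transformation is a one-line computation:

```python
# Complexity density: cyclomatic complexity per source statement.
def complexity_density(cyclomatic, source_statements):
    return cyclomatic / source_statements

print(complexity_density(450, 12_000))  # 0.0375 (hypothetical 12 KLOC system)
```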

20.
孙昌爱, 吴思懿, 张守峰, 付安. 《软件学报》 (Journal of Software), 2024, 35(6): 2844-2862
BPEL (Business Process Execution Language) is an executable web-service composition language. Compared with traditional programs, BPEL programs differ considerably in programming model and execution style. These characteristics make it challenging to locate and fix faults that testing reveals in BPEL programs, and fault-repair techniques for traditional software are difficult to apply to them directly. Starting from mutation analysis, this paper proposes a template-matching-based fault repair method for BPEL programs named BPELRepair. To overcome the high computational overhead of mutation-analysis-based repair, several optimization strategies are proposed from three angles: patch generation, test-case selection, and termination conditions. A supporting tool for BPEL fault repair was developed to improve the automation and efficiency of repair. The proposed repair technique and optimization strategies are evaluated through an empirical study. The experimental results show that the proposed method successfully repairs about 53% of BPEL program faults, and the proposed optimization strategies significantly reduce the overhead of search matching, patch validation, test-case execution, and fault repair.
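A hedged sketch of the generate-and-validate loop behind mutation-based repair, with all names illustrative (BPELRepair's template matching and its specific optimizations are not reproduced):

```python
# Apply template patches to a faulty "program", validate against tests,
# and stop early once a patch passes or the patch budget is exhausted.
def repair(program, templates, tests, max_patches=1000):
    for i, template in enumerate(templates):
        if i >= max_patches:                         # termination condition
            return None
        candidate = template(program)                # patch generation
        if all(test(candidate) for test in tests):   # validation
            return candidate
    return None

program = {"threshold": 10}                          # toy off-by-one fault
templates = [lambda p, d=d: {**p, "threshold": p["threshold"] + d}
             for d in (-1, 1, 2)]
tests = [lambda p: p["threshold"] > 10]
print(repair(program, templates, tests))             # {'threshold': 11}
```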
