Similar Literature
20 similar documents found.
1.
This paper proposes a new scheme for constructing software reliability growth models (SRGM) based on a nonhomogeneous Poisson process (NHPP). The main focus is to provide an efficient parametric decomposition method for software reliability modeling, which considers both testing efforts and fault detection rates (FDR). In general, the software fault detection/removal mechanisms depend on previously detected/removed faults and on how testing efforts are used. From practical field studies, it is likely that we can estimate the testing-effort consumption pattern and predict the trends of FDR. A set of time-variable, testing-effort-based FDR models was developed that has the inherent flexibility of capturing a wide range of possible fault detection trends: increasing, decreasing, and constant. This scheme has a flexible structure and can model a wide spectrum of software development environments, considering various testing efforts. The paper describes the FDR, which can be obtained from historical records of previous releases or other similar software projects, and incorporates the related testing activities into this new modeling approach. The applicability of our model and the related parametric decomposition methods are demonstrated through several real data sets from various software projects. The evaluation results show that the proposed framework to incorporate testing efforts and FDR for SRGM has a fairly accurate prediction capability and depicts the real-life situation more faithfully. This technique can be applied to a wide range of software systems.
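As a rough illustration of the kind of model described above (not the paper's calibrated formulation), the following Python sketch evaluates a testing-effort-dependent NHPP mean value function m(t) = a(1 - exp(-r W(t))); the Weibull-type effort curve and all parameter values are assumptions made purely for illustration.

import numpy as np

def cumulative_effort(t, alpha=100.0, beta=0.05, m=1.2):
    """Cumulative testing effort W(t); a Weibull-type curve is one common assumption."""
    return alpha * (1.0 - np.exp(-beta * t ** m))

def mean_faults(t, a=150.0, r=0.03):
    """Expected cumulative faults m(t) = a * (1 - exp(-r * W(t))) for an NHPP whose
    fault detection rate r acts on consumed testing effort rather than raw calendar time."""
    return a * (1.0 - np.exp(-r * cumulative_effort(t)))

if __name__ == "__main__":
    for week in (1, 5, 10, 20, 40):
        print(f"week {week:2d}: expected faults detected = {mean_faults(week):6.1f}")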

2.
A software reliability model presented here assumes a time-dependent failure rate and that debugging can remove as well as add faults with a nonzero probability. Based on these assumptions, the expected number of faults and mean standard error of the estimated faults remaining in the system are derived. The model treats the capability of correcting errors as a random process under which most of the existing software reliability models become special cases of this proposed one. It, therefore, serves to realize a competing risk problem and to unify much of the current software reliability theory. The model deals with the nonindependence of error correction and should be extremely valuable for a large-scale software project.

3.
Over the past 30 years, many software reliability growth models (SRGM) have been proposed. Often, it is assumed that detected faults are immediately corrected when mathematical models are developed. This assumption may not be realistic in practice because the time to remove a detected fault depends on the complexity of the fault, the skill and experience of personnel, the size of the debugging team, the technique(s) being used, and so on. During software testing, practical experiences show that mutually independent faults can be directly detected and removed, but mutually dependent faults can be removed if and only if the leading faults have been removed. That is, dependent faults may not be immediately removed, and the fault removal process lags behind the fault detection process. In this paper, we will first give a review of fault detection & correction processes in software reliability modeling. We will then illustrate, with several examples, the fact that detected faults cannot be immediately corrected. We also discuss the software fault dependency in detail, and study how to incorporate both fault dependency and debugging time lag into software reliability modeling. The proposed models are fairly general models that cover a variety of known SRGM under different conditions. Numerical examples are presented, and the results show that the proposed framework to incorporate both fault dependency and debugging time lag for SRGM has a better prediction capability. In addition, an optimal software release policy for the proposed models, based on a cost-reliability criterion, is proposed. The main purpose is to minimize the cost of software development when a desired reliability objective is given.

4.
Musa, J.D. IEEE Spectrum, 1989, 26(2): 39-42
The author discusses a measure of software reliability and various models for characterizing it, the result of 15 years of theoretical research and experimental application, which are moving into practice and starting to pay off. These tools let developers quantify reliability, give them ways to predict how reliability will vary as testing progresses, and help them use that information to decide when to release software. He examines the distinction between failures and faults and how these affect reliability. He compares execution-time models with calendar-time models, which are less effective, and discusses the choice of execution-time models. The author then describes a generic, step-by-step procedure to guide software reliability engineers in using the reliability models.
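For readers unfamiliar with execution-time models, the sketch below implements the widely published basic execution-time model associated with Musa, mu(tau) = nu0 * (1 - exp(-lam0 * tau / nu0)); the parameter values are invented for illustration and are not taken from the article.

import numpy as np

def expected_failures(tau, nu0=120.0, lam0=8.0):
    """Expected cumulative failures after tau CPU-hours of execution."""
    return nu0 * (1.0 - np.exp(-lam0 * tau / nu0))

def failure_intensity(tau, nu0=120.0, lam0=8.0):
    """Current failure intensity lambda(tau) = lam0 * exp(-lam0 * tau / nu0)."""
    return lam0 * np.exp(-lam0 * tau / nu0)

def execution_time_to_target(lam_target, nu0=120.0, lam0=8.0):
    """Execution time needed to drive the intensity down to lam_target,
    the kind of release question the article discusses."""
    return (nu0 / lam0) * np.log(lam0 / lam_target)

print(f"CPU-hours to reach 0.5 failures/hour: {execution_time_to_target(0.5):.1f}")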

5.
A method is presented for software reliability estimation that is input-domain based. It was developed to overcome some of the difficulties in using existing reliability models in critical applications. The method classifies the faults that could be in a software program. Then it accounts for the distribution, over the input domain, of input values which could activate each fault type. The method assumes that these distributions change, by reducing their extent, with the number of test cases correctly executed. Using a simple example, the paper suggests a convenient fault classification and a choice of distributions for each fault type. The introduction of these distributions permits better use of the information collected during the testing phase.

6.
A large number of software reliability growth models have been proposed to analyse the reliability of a software application based on the failure data collected during the testing phase of the application. To ensure analytical tractability, most of these models are based on simplifying assumptions of instantaneous & perfect debugging. As a result, the estimates of the residual number of faults, failure rate, reliability, and optimal software release time obtained from these models tend to be optimistic. To obtain realistic estimates, it is desirable that the assumptions of instantaneous & perfect debugging be amended. In this paper we discuss the various policies according to which debugging may be conducted. We then describe a rate-based simulation framework to incorporate explicit debugging activities, which may be conducted according to the different debugging policies, into software reliability growth models. The simulation framework can also consider the possibility of imperfect debugging in conjunction with any of the debugging policies. Further, we also present a technique to compute the failure rate, and the reliability of the software, taking into consideration explicit debugging. An economic cost model to determine the optimal software release time in the presence of debugging activities is also described. We illustrate the potential of the simulation framework using two case studies.
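The rate-based simulation idea can be pictured with a few lines of Python; the sketch below is a toy version with assumed parameters (per-fault detection rate, mean debugging delay, fix success probability) and is not the authors' framework or case-study data.

import random

def simulate_debugging(n_faults=50, per_fault_rate=0.05, fix_mean=2.0,
                       p_fix=0.9, horizon=200.0, seed=1):
    """Rate-based simulation sketch: each remaining fault is detected with rate
    per_fault_rate; a detected fault is removed only after an exponential
    debugging delay, and only with probability p_fix (imperfect debugging)."""
    random.seed(seed)
    t = 0.0
    remaining = n_faults
    detect_times, correct_times = [], []
    while remaining > 0:
        t += random.expovariate(per_fault_rate * remaining)
        if t >= horizon:
            break
        detect_times.append(t)
        if random.random() < p_fix:
            # successful fix completes after an explicit (non-instantaneous) delay
            remaining -= 1
            correct_times.append(t + random.expovariate(1.0 / fix_mean))
        # an unsuccessful fix leaves the fault in the program, so it can be detected again
    return detect_times, sorted(c for c in correct_times if c <= horizon)

if __name__ == "__main__":
    detected, corrected = simulate_debugging()
    print(f"detected {len(detected)} faults, {len(corrected)} corrections completed by the horizon")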

7.
A two-component predictability measure that characterizes the long-term predictive capability of a model is presented. One component, average error, measures how well a model predicts throughout the testing phase. The other component, average bias, measures the general tendency to overestimate or underestimate the number of faults. Data sets for both large and small projects from diverse sources with various initial fault density ranges have been analyzed. The results show that: (i) the logarithmic model seems to predict well in most data sets, (ii) the inverse polynomial model can be used as the next alternative, and (iii) the delayed S-shaped model, which fit well in some data sets, generally performed poorly. The statistical analysis shows that these models have appreciably different predictive capabilities.
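A minimal sketch of the two components, using relative errors at a series of test-phase checkpoints; the exact formulas and the data below are assumptions for illustration, not taken from the paper.

def average_error(predicted, actual):
    """Mean magnitude of the relative prediction error over all checkpoints."""
    return sum(abs(p - a) / a for p, a in zip(predicted, actual)) / len(actual)

def average_bias(predicted, actual):
    """Signed mean relative error: positive values indicate a tendency to
    overestimate the number of faults, negative values to underestimate."""
    return sum((p - a) / a for p, a in zip(predicted, actual)) / len(actual)

# hypothetical checkpoint data: model predictions vs. eventually observed fault totals
pred = [40, 55, 63, 70, 74]
obs = [38, 50, 61, 72, 80]
print(f"average error = {average_error(pred, obs):.3f}, average bias = {average_bias(pred, obs):+.3f}")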

8.
This paper presents a software reliability growth model (SRGM) for N-version programming (NVP) systems (NVP-SRGM), based on the nonhomogeneous Poisson process (NHPP). Although many papers have been devoted to modeling NVP-system reliability, most of them consider only the stable reliability, i.e., they do not consider the reliability growth in NVP systems due to continuous removal of faults from software versions. The model in this paper is the first reliability-growth model for NVP systems which considers the error-introduction rate and the error-removal efficiency. During testing and debugging, when a software fault is found, a debugging effort is devoted to removing this fault. Due to the high complexity of the software, this fault might not be successfully removed, and new faults might be introduced into the software. By applying a generalized NHPP model to the NVP system, a new NVP-SRGM is established, in which the multi-version coincident failures are well modeled. A simplified software control logic for a water-reservoir control system illustrates how to apply this new software reliability model. The s-confidence bounds are provided for system-reliability estimation. This software reliability model can be used to evaluate the reliability and to predict the performance of NVP systems. More applications are needed to fully validate the proposed NVP-SRGM for quantifying the reliability of fault-tolerant software systems in a general industrial setting. As the first model of its kind in NVP reliability-growth modeling, the proposed NVP-SRGM can be used to overcome the shortcomings of the independent reliability model. It predicts the system reliability more accurately than the independent model and can be used to help determine when to stop testing, which is a key question in the testing and debugging phase of the NVP system-development life cycle.

9.
We consider two kinds of software testing-resource allocation problems. The first problem is to minimize the number of remaining faults given a fixed amount of testing-effort, and a reliability objective. The second problem is to minimize the amount of testing-effort given the number of remaining faults, and a reliability objective. We have proposed several strategies for module testing to help software project managers solve these problems, and make the best decisions. We provide several systematic solutions based on a nonhomogeneous Poisson process model, allowing systematic allocation of a specified amount of testing-resource expenditures for each software module under some constraints. We describe several numerical examples on the optimal testing-resource allocation problems to show applications & impacts of the proposed strategies during module testing. Experimental results indicate the advantages of the approaches we proposed in guiding software engineers & project managers toward the best testing-resource allocation in practice. Finally, an extensive sensitivity analysis is presented to investigate the effects of various principal parameters on the optimization problem of testing-resource allocation. The results help identify which parameters have the most significant influence, and how the optimal testing-effort expenditures change with variations in the fault detection rate & the expected number of initial faults.
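One way to picture the first allocation problem is as a constrained optimization: spend a fixed testing-effort budget across modules so that the expected number of remaining faults is minimized. The sketch below uses SciPy and an exponential per-module mean value function; the module parameters and the budget are invented for illustration and are not from the paper.

import numpy as np
from scipy.optimize import minimize

# assumed module parameters: initial fault content a_i and fault detection rate r_i
a = np.array([80.0, 60.0, 45.0, 30.0])
r = np.array([0.020, 0.015, 0.030, 0.010])
W = 200.0  # total testing effort available

def remaining_faults(w):
    """Expected faults left in all modules after spending effort w_i on module i,
    assuming an exponential NHPP mean value function per module."""
    return np.sum(a * np.exp(-r * w))

res = minimize(remaining_faults, x0=np.full(4, W / 4),
               bounds=[(0.0, W)] * 4,
               constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - W}],
               method="SLSQP")
print("optimal effort per module:", np.round(res.x, 1))
print("expected remaining faults:", round(remaining_faults(res.x), 1))

The same setup can be inverted for the second problem (minimize total effort subject to a remaining-fault or reliability target) by swapping the objective and the constraint.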

10.
Effect of code coverage on software reliability measurement
Existing software reliability-growth models often over-estimate the reliability of a given program. Empirical studies suggest that the over-estimations exist because the models do not account for the nature of the testing. Every testing technique has a limit to its ability to reveal faults in a given system. Thus, as testing continues in its region of saturation, no more faults are discovered and inaccurate reliability-growth phenomena are predicted from the models. This paper presents a technique intended to solve this problem, using both time and code coverage measures for the prediction of software failures in operation. Coverage information collected during testing is used only to consider the effective portion of the test data. Execution time between test cases, which neither increases code coverage nor causes a failure, is reduced by a parameterized factor. Experiments were conducted to evaluate this technique, on a program created in a simulated environment with simulated faults, and on two industrial systems that contained tens of ordinary faults. Two well-known reliability models, Goel-Okumoto and Musa-Okumoto, were applied to both the raw data and to the data adjusted using this technique. Results show that over-estimation of reliability is properly corrected in the cases studied. This new approach has the potential, not only to achieve more accurate applications of software reliability models, but to reveal effective ways of conducting software testing.
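A minimal sketch of the adjustment step described above: execution time attributed to test cases that neither increase coverage nor trigger a failure is compressed by a parameterized factor before a reliability-growth model is fitted. The log format and the compression factor are assumptions for illustration.

def adjust_times(test_log, compression=0.1):
    """Yield the adjusted cumulative execution time after each test case.
    Time for tests that add no coverage and cause no failure is scaled down."""
    total = 0.0
    for delta_t, coverage_gain, failed in test_log:
        total += delta_t if (coverage_gain > 0 or failed) else compression * delta_t
        yield total, failed

# hypothetical log entries: (execution time of the test, new coverage gained, failure observed?)
log = [(5.0, 12, False), (4.0, 0, False), (6.0, 3, True), (7.0, 0, False)]
for cum_t, failed in adjust_times(log):
    print(f"adjusted cumulative time {cum_t:5.2f}  failure={failed}")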

11.
This note provides an alternative formulation of the software reliability models of Jelinski-Moranda and Littlewood. The formulation is in terms of failure times rather than interfailure times; the models are then equivalent to observing the first n order statistics (n is random) from a random sample of size N. The models can be generalized by using a decreasing failure rate for the failure times. For the Jelinski-Moranda model, we comment on the maximum likelihood estimate and an improved estimate for the initial number of faults in the software. We discuss how to check the validity of the Jelinski-Moranda model.
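The Jelinski-Moranda model referred to above assumes N initial faults and a hazard of phi * (N - i) after i faults have been fixed; a maximum likelihood fit can then be sketched as below. The inter-failure times and starting values are invented for illustration, and the note's improved estimator is not reproduced here.

import numpy as np
from scipy.optimize import minimize

def jm_neg_loglik(params, gaps):
    """Negative log-likelihood of the Jelinski-Moranda model: before the
    (i+1)-th failure the hazard is phi * (N - i), with i faults already fixed."""
    N, phi = params
    n = len(gaps)
    if N <= n - 1 or phi <= 0:
        return np.inf
    lam = phi * (N - np.arange(n))
    return -np.sum(np.log(lam) - lam * gaps)

# hypothetical inter-failure times (hours), showing some reliability growth
gaps = np.array([3., 5., 4., 8., 7., 12., 9., 15., 14., 20., 18., 25., 30., 28., 40.])
fit = minimize(jm_neg_loglik, x0=[len(gaps) + 5.0, 0.01], args=(gaps,), method="Nelder-Mead")
N_hat, phi_hat = fit.x
print(f"estimated initial faults N = {N_hat:.1f}, per-fault rate phi = {phi_hat:.4f}")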

12.
Two kinds of software-testing management problems are considered: a testing-resource allocation problem, to best use specified testing resources during module testing, and a testing-resource control problem, concerning how to spend the allocated testing-resource expenditures over the course of module testing. A software reliability growth model based on a nonhomogeneous Poisson process is introduced. The model describes the time-dependent behavior of software errors detected and testing-resource expenditures spent during the testing. The optimal allocation and control of testing resources among software modules can improve reliability and shorten the testing stage. Based on the model, numerical examples of these two software-testing management problems are presented.

13.
Count Models for Software Quality Estimation
Identifying which software modules, during the software development process, are likely to be faulty is an effective technique for improving software quality. Such an approach allows a more focused software quality & reliability enhancement endeavor. The development team may also like to know the number of faults that are likely to exist in a given program module, i.e., a quantitative quality prediction. However, classification techniques such as the logistic regression model (lrm) cannot be used to predict the number of faults. In contrast, count models such as the Poisson regression model (prm), and the zero-inflated Poisson (zip) regression model can be used to obtain both a qualitative classification, and a quantitative prediction for software quality. In the case of the classification models, a classification rule based on our previously developed generalized classification rule is used. In the context of count models, this study is the first to propose a generalized classification rule. Case studies of two industrial software systems are examined, and for each we developed two count models (prm, and zip), and a classification model (lrm). Evaluating the predictive capabilities of the models, we concluded that the prm, and the zip models have classification accuracies similar to the lrm. The count models are also used to predict the number of faults for the two case studies. The zip model yielded better fault prediction accuracy than the prm. As compared to other quantitative prediction models for software quality, such as multiple linear regression (mlr), the prm, and zip models have a unique property of yielding the probability that a given number of faults will occur in any module.
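As a sketch of how a count model yields both a quantitative prediction and a classification, the code below fits a Poisson regression with statsmodels; the module metrics, fault counts, and the fault-proneness threshold are invented for illustration, and a zero-inflated fit would follow the same pattern.

import numpy as np
import statsmodels.api as sm

# hypothetical module metrics (lines of code, cyclomatic complexity) and observed fault counts
X = np.array([[120, 4], [340, 11], [80, 2], [510, 19], [260, 7], [95, 3], [430, 15]])
faults = np.array([0, 3, 0, 6, 1, 0, 4])

# Poisson regression: E[faults | x] = exp(b0 + b1 * LOC + b2 * complexity)
prm = sm.GLM(faults, sm.add_constant(X), family=sm.families.Poisson()).fit()

# quantitative prediction for new modules, plus a simple classification rule:
# flag a module as fault-prone when its predicted fault count exceeds a threshold
new_modules = sm.add_constant(np.array([[300, 9], [60, 1]]), has_constant="add")
pred = prm.predict(new_modules)
print("predicted faults:", np.round(pred, 2), " fault-prone:", pred > 1.0)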

14.
Two models of software testing, developed previously by T. Downs (see IEEE Trans. Software Eng., vol.SE-11, no.4, p.375-86, 1985, and IEEE Trans. Software Eng., vol.SE-12, no.9, p.979-87, 1986), are generalized to incorporate a greater degree of realism. This generalization leads to three new models. A fourth model, which has been developed using a different line of reasoning, is also presented. The performance of these models as reliability predictors is assessed by applying them to 16 sets of failure data collected from various software development projects. Comparisons of performance are made with the two earlier models and with two variants of the model of B. Littlewood and J.L. Verrall (1973). Three distinct measures of performance are employed. The performance of the new models is generally superior to that of the older models, with one model showing outstanding performance under all three measures.

15.
This paper combines two distinct areas of research, namely software reliability growth modeling, and efficacy studies on software testing methods. It begins by proposing two software reliability growth models with a new approach to modeling. These models make the basic assumption that the intensity of failure occurrence during the testing phase of a piece of software is proportional to the s-expected probability of selecting a failure-causing input. The first model represents random testing, and the second model represents partition testing. These models provide the s-expected number of failures over a period, which in turn is used in analyzing the failure detection abilities of testing strategies. The specific areas of investigation are 1) the conditions under which partition testing yields optimal results, and 2) a comparison between partition testing and random testing in terms of efficacy.
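The comparison the paper investigates can be pictured with a standard textbook setup: k input-domain partitions with operational-profile selection probabilities p_i and per-partition failure rates theta_i, comparing k random tests against one test per partition. The numbers below are assumptions chosen to show a case where partition testing helps, and are not the paper's models or data.

import numpy as np

# assumed partitions: selection probability under the operational profile and failure rate within each
p = np.array([0.50, 0.30, 0.15, 0.05])
theta = np.array([0.001, 0.002, 0.010, 0.200])
k = len(p)  # budget: k random test cases vs. one test case per partition

p_random = 1.0 - (1.0 - np.dot(p, theta)) ** k  # at least one failure under random testing
p_partition = 1.0 - np.prod(1.0 - theta)        # at least one failure under partition testing
print(f"P(detect a failure): random = {p_random:.3f}, partition = {p_partition:.3f}")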

16.
This paper presents a new methodology for predicting software reliability in the field environment. Our work differs from some existing models that assume a constant failure detection rate for software testing and field operation environments, as this new methodology considers the random environmental effects on software reliability. Assuming that all the random effects of the field environments can be captured by a unit-free environmental factor, η, which is modeled as a randomly distributed variable, we establish a generalized random field environment (RFE) software reliability model that covers both the testing phase and the operating phase in the software development cycle. Based on the generalized RFE model, two specific random field environment reliability models are proposed for predicting software reliability in the field environment: the γ-RFE model, and the β-RFE model. A set of software failure data from a telecommunication software application is used to illustrate the proposed models, both of which provide very good fits to the software failures in both testing and operation environments. This new methodology provides a viable way to model the user environments, and further makes adjustments to the reliability prediction for similar software products. Based on the generalized software reliability model, further work may include the development of software cost models and the optimum software release policies under random field environments.

17.
An assumption commonly made in early models of software reliability is that the failure rate of a program is a constant multiple of the (unknown) number of faults remaining. This implies that all faults contribute the same amount to the failure rate of the program. The assumption is challenged and an alternative proposed. The suggested model results in earlier fault fixes having a greater effect than later ones (the faults which make the greatest contribution to the overall failure rate tend to show themselves earlier, and so are fixed earlier), and in the decreasing failure rate (DFR) property between fault fixes (assurance about programs increases during periods of failure-free operation, as well as at fault fixes). The model is tractable and allows a variety of reliability measures to be calculated. Predictions of total execution time to achieve a target reliability, and total number of fault fixes to target reliability, are obtained. The model might also apply to hardware reliability growth resulting from the elimination of design errors.

18.
19.
Count models, such as the Poisson regression model, and the negative binomial regression model, can be used to obtain software fault predictions. With the aid of such predictions, the development team can improve the quality of operational software. The zero-inflated, and hurdle count models may be more appropriate when, for a given software system, the number of modules with faults is very small. Related literature lacks quantitative guidance regarding the application of count models for software quality prediction. This study presents a comprehensive empirical investigation of eight count models in the context of software fault prediction. It includes comparative hypothesis testing, model selection, and performance evaluation for the count models with respect to different criteria. The case study presented is that of a full-scale industrial software system. It is observed that the information obtained from hypothesis testing, and model selection techniques was not consistent with the predictive performances of the count models. Moreover, the comparative analysis based on one criterion did not match that of another criterion. However, with respect to a given criterion, the performance of a count model is consistent for both the fit, and test data sets. This ensures that, if a fitted model is considered good based on a given criterion, then the model will yield a good prediction based on the same criterion. The relative performances of the eight models are evaluated based on a one-way ANOVA model, and Tukey's multiple comparison technique. The comparative study is useful in selecting the best count model for estimating the quality of a given software system.

20.
The nonhomogeneous error detection rate model has been extensively used in software reliability modelling. An important management responsibility is to make a decision about the release of software, so as to result in maximum cost effectiveness. It is well known that the effort for correction of an error increases, rather heavily, from an initial testing phase to a final testing phase and then to the operational phase. In this paper, a method is presented to systematically determine this optimum release instant. The fact that some faults can be regenerated during the process of correction has also been considered in the modelling; this has been ignored in previous literature. Also, the partition of the testing phase into initial and final phases is considered desirable, as the effort per error correction will be significantly different in these two phases. An example illustrates the entire procedure for various values of the total number of errors and the trends of cost and release time.
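A bare-bones version of the release-time decision can be written down with a simple cost function; the sketch below uses an exponential NHPP and omits the paper's refinements (error regeneration and the initial/final testing-phase split). All parameter values are assumptions for illustration.

import numpy as np

a, b = 100.0, 0.05           # assumed total error content and error detection rate (exponential NHPP)
c1, c2, c3 = 1.0, 5.0, 0.5   # assumed costs: per error fixed in testing, per error fixed in operation, per unit testing time

def mean_errors(T):
    """Expected errors detected (and corrected) by release time T."""
    return a * (1.0 - np.exp(-b * T))

def total_cost(T):
    """Testing-phase fixes + operational-phase fixes + cost of the testing time itself."""
    return c1 * mean_errors(T) + c2 * (a - mean_errors(T)) + c3 * T

# setting dC/dT = (c1 - c2) * a * b * exp(-b * T) + c3 = 0 gives the optimum release instant
T_opt = np.log((c2 - c1) * a * b / c3) / b
print(f"optimal release time T* = {T_opt:.1f}, expected total cost = {total_cost(T_opt):.1f}")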
