Similar Documents
20 similar documents were retrieved (search time: 31 ms).
1.
This paper investigates an SRGM (software reliability growth model) based on the NHPP (nonhomogeneous Poisson process) that incorporates a logistic testing-effort function. SRGMs proposed in the literature consider the amount of testing-effort spent on software testing, which can be depicted as an exponential, Rayleigh, or Weibull curve. However, in some software development environments it may not be appropriate to represent the testing-effort consumption curve by any of those curves. Therefore, this paper shows that a logistic testing-effort function can be used to describe the software-development/test-effort curve and that it gives good predictive capability on real failure data. Parameters are estimated, and experiments are performed on actual test/debug data sets. Results from application to a real data set are analyzed and compared with other existing models to show that the proposed model predicts better. In addition, an optimal software release policy for this model, based on cost-reliability criteria, is proposed.
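To make the model structure concrete, here is a minimal Python sketch (not the authors' code; the exact parameterization of the logistic curve and all parameter values are illustrative assumptions) of an NHPP mean value function driven by a logistic testing-effort function, together with the usual conditional-reliability measure:

```python
import numpy as np

def logistic_effort(t, N=50.0, A=20.0, alpha=0.2):
    """Cumulative testing-effort W(t) described by a logistic curve."""
    return N / (1.0 + A * np.exp(-alpha * t))

def mean_value(t, a=100.0, r=0.05):
    """Expected number of faults detected by time t:
    m(t) = a * (1 - exp(-r * (W(t) - W(0))))."""
    return a * (1.0 - np.exp(-r * (logistic_effort(t) - logistic_effort(0.0))))

def reliability(x, t):
    """R(x | t) = exp(-(m(t + x) - m(t))): probability of no failure in (t, t + x]."""
    return np.exp(-(mean_value(t + x) - mean_value(t)))

if __name__ == "__main__":
    for week in (5, 10, 20, 40):
        print(week, round(mean_value(week), 2), round(reliability(1.0, week), 4))
```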

2.
Over the last several decades, many software reliability growth models (SRGM) have been developed to help engineers and managers track and measure the growth of reliability as software is improved. However, some research indicates that the delayed S-shaped model may not fit software failure data well when the testing-effort spent on fault detection is not constant. Thus, in this paper, we first review the logistic testing-effort function, which can be used to describe the amount of testing-effort spent on software testing. We describe how to incorporate the logistic testing-effort function into both exponential-type and S-shaped software reliability models. The proposed models are also discussed under both ideal and imperfect debugging conditions. Results from applying the proposed models to two real data sets are discussed and compared with other traditional SRGMs to show that the proposed models give better predictions, and that the logistic testing-effort function is suitable for incorporating directly into both exponential-type and S-shaped software reliability models.
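For reference, the forms typically used when a cumulative testing-effort function is folded into these two model families are sketched below; these are standard textbook forms rather than the authors' exact equations, so treat the parameterization as an assumption. With a logistic effort curve and W*(t) = W(t) - W(0):

\[
W(t) = \frac{N}{1 + A e^{-\alpha t}}, \qquad
m_{\text{exp}}(t) = a\left(1 - e^{-r W^{*}(t)}\right), \qquad
m_{\text{S}}(t) = a\left[1 - \left(1 + r W^{*}(t)\right) e^{-r W^{*}(t)}\right],
\]

where a is the expected number of initial faults and r is the fault detection rate per unit of testing-effort.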

3.
We consider two kinds of software testing-resource allocation problems. The first is to minimize the number of remaining faults given a fixed amount of testing-effort and a reliability objective; the second is to minimize the amount of testing-effort given the number of remaining faults and a reliability objective. We propose several strategies for module testing to help software project managers solve these problems and make the best decisions. We provide systematic solutions based on a nonhomogeneous Poisson process model, allowing systematic allocation of a specified amount of testing-resource expenditure to each software module under some constraints. We describe several numerical examples of the optimal testing-resource allocation problems to show the applications and impact of the proposed strategies during module testing. Experimental results indicate the advantages of the proposed approaches in guiding software engineers and project managers toward the best testing-resource allocation in practice. Finally, an extensive sensitivity analysis is presented to investigate the effects of the principal parameters on the optimization problem of testing-resource allocation. The results show which parameters have the most significant influence, and how the optimal testing-effort expenditures change with variations in the fault detection rate and the expected number of initial faults.
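The first allocation problem can be sketched numerically as a constrained minimization. The sketch below assumes an exponential, effort-driven mean value function per module; the module data, detection rates, and budget are made-up illustrative values, not figures from the paper:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative module parameters (assumed, not from the paper):
a = np.array([80.0, 50.0, 120.0, 60.0])   # expected initial faults per module
r = np.array([0.03, 0.05, 0.02, 0.04])    # fault detection rate per unit effort
W_total = 200.0                           # total testing-effort budget

def remaining_faults(w):
    """Expected faults left in each module after spending effort w_i on it,
    assuming an exponential NHPP mean value function per module."""
    return np.sum(a * np.exp(-r * w))

res = minimize(
    remaining_faults,
    x0=np.full(a.size, W_total / a.size),   # start from an even split
    method="SLSQP",
    bounds=[(0.0, W_total)] * a.size,
    constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - W_total}],
)

print("optimal effort per module:", np.round(res.x, 1))
print("expected remaining faults:", round(res.fun, 2))
```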

4.
An improved software reliability release policy is presented, based on the nonhomogeneous Poisson process (NHPP) and incorporating the effect of testing effort. Testing-effort functions are modelled by exponential, Rayleigh, and Weibull curves. The optimal software release time is determined by minimizing the total expected software cost subject to a software reliability objective. Numerical examples are included to illustrate the software release policy.
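A hedged sketch of the kind of optimization described here, assuming a conventional three-term cost structure, an exponential testing-effort curve, and illustrative cost coefficients (none of these values come from the paper):

```python
import numpy as np

# Illustrative parameters (assumptions, not fitted to any data set)
a, r = 120.0, 0.04          # expected initial faults, detection rate per unit effort
alpha, beta = 300.0, 0.01   # exponential effort curve W(t) = alpha * (1 - exp(-beta*t))
c1, c2, c3 = 1.0, 50.0, 0.5 # cost: fix in test, fix in field, testing per unit effort
R0, x = 0.95, 1.0           # reliability objective over a mission time x

def W(t):  return alpha * (1.0 - np.exp(-beta * t))
def m(t):  return a * (1.0 - np.exp(-r * W(t)))
def cost(T): return c1 * m(T) + c2 * (a - m(T)) + c3 * W(T)
def rel(T):  return np.exp(-(m(T + x) - m(T)))

ts = np.linspace(1.0, 500.0, 5000)
feasible = ts[rel(ts) >= R0]                       # times meeting the reliability objective
T_star = feasible[np.argmin(cost(feasible))] if feasible.size else None
print("approximate optimal release time:", None if T_star is None else round(T_star, 1))
```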

5.
We discuss a software reliability growth model with testing-effort, based on a nonhomogeneous Poisson process, and its application to a testing-effort control problem. The time-dependent behaviour of testing-effort expenditure, which is incorporated into software reliability growth, is described by a Weibull curve because of its flexibility in representing a variety of testing-effort expenditure patterns. Using several sets of actual software error data, model fitting and examples of the testing-effort control problem are illustrated.
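The Weibull testing-effort curve referred to here is commonly written as follows (a standard form; the paper's exact parameterization may differ):

\[
W(t) = \alpha\left(1 - e^{-\beta t^{m}}\right), \qquad
w(t) = \frac{dW(t)}{dt} = \alpha \beta m\, t^{m-1} e^{-\beta t^{m}},
\]

with shape parameter m = 1 giving an exponential curve and m = 2 a Rayleigh curve, which is why the Weibull form covers a range of effort-expenditure patterns.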

6.
This paper presents a new methodology for predicting software reliability in the field environment. Our work differs from existing models that assume a constant failure detection rate for both software testing and field operation, because this new methodology considers the random environmental effects on software reliability. Assuming that all the random effects of the field environment can be captured by a unit-free environmental factor η, which is modeled as a randomly distributed variable, we establish a generalized random field environment (RFE) software reliability model that covers both the testing phase and the operating phase of the software development cycle. Based on the generalized RFE model, two specific random field environment reliability models are proposed for predicting software reliability in the field environment: the γ-RFE model and the β-RFE model. A set of software failure data from a telecommunication software application is used to illustrate the proposed models, both of which provide very good fits to the software failures in both the testing and operation environments. This new methodology provides a viable way to model the user environment, and further to adjust the reliability prediction for similar software products. Based on the generalized software reliability model, further work may include the development of software cost models and optimum software release policies under random field environments.
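A rough Monte Carlo sketch of the idea, assuming a Goel-Okumoto model for the testing phase and a field detection rate scaled by a random η. The gamma and beta parameters, and this particular way of averaging over η, are illustrative assumptions rather than the paper's closed-form γ-RFE / β-RFE results:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions): Goel-Okumoto during testing,
# detection rate scaled by a random environmental factor eta in the field.
a, b, T_test = 100.0, 0.05, 60.0

def m_test(t):
    """Expected faults detected by test time t (Goel-Okumoto form)."""
    return a * (1.0 - np.exp(-b * t))

def field_mean_value(tau, eta_samples):
    """Expected faults detected by field time tau, averaged over eta."""
    remaining = a - m_test(T_test)               # faults surviving the test phase
    return np.mean(remaining * (1.0 - np.exp(-eta_samples * b * tau)))

eta_gamma = rng.gamma(shape=2.0, scale=0.25, size=100_000)   # gamma-distributed eta
eta_beta = rng.beta(2.0, 5.0, size=100_000)                  # beta-distributed eta

for tau in (10, 50, 100):
    print(tau, round(field_mean_value(tau, eta_gamma), 2),
               round(field_mean_value(tau, eta_beta), 2))
```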

7.
This paper presents an NHPP (nonhomogeneous Poisson process) based SRGM (software reliability growth model) for NVP (N-version programming) systems, referred to as NVP-SRGM. Although many papers have been devoted to modeling NVP-system reliability, most of them consider only stable reliability, i.e., they do not consider the reliability growth in NVP systems due to the continuous removal of faults from the software versions. The model in this paper is the first reliability-growth model for NVP systems that considers the error-introduction rate and the error-removal efficiency. During testing and debugging, when a software fault is found, debugging effort is devoted to removing it. Because of the high complexity of the software, this fault might not be successfully removed, and new faults might be introduced. By applying a generalized NHPP model to the NVP system, a new NVP-SRGM is established in which multi-version coincident failures are well modeled. A simplified software control logic for a water-reservoir control system illustrates how to apply this new software reliability model. The s-confidence bounds are provided for system-reliability estimation. This software reliability model can be used to evaluate the reliability and to predict the performance of NVP systems. Further applications are needed to fully validate the proposed NVP-SRGM for quantifying the reliability of fault-tolerant software systems in a general industrial setting. As the first model of its kind in NVP reliability-growth modeling, the proposed NVP-SRGM can be used to overcome the shortcomings of the independent reliability model. It predicts the system reliability more accurately than the independent model, and can help determine when to stop testing, which is a key question in the testing and debugging phase of the NVP system-development life cycle.
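As background for the comparison mentioned in the abstract, the independent reliability model for a three-version majority-voting NVP system, the baseline the proposed NVP-SRGM improves on, can be written as follows; the coincident-failure and reliability-growth terms that the NVP-SRGM adds are deliberately omitted here:

\[
R_{\text{indep}} = R_1 R_2 R_3 + R_1 R_2 (1 - R_3) + R_1 (1 - R_2) R_3 + (1 - R_1) R_2 R_3,
\]

where R_i is the reliability of version i over the mission interval and the versions are assumed s-independent.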

8.
Software Reliability Growth Models with Testing-Effort
Many software reliability growth models have been proposed in the past decade. Those models tacitly assume that testing-effort expenditures are constant throughout software testing. This paper develops more realistic software reliability growth models that incorporate the effect of testing-effort. The software error detection phenomenon during testing is modeled by a nonhomogeneous Poisson process. The software reliability assessment measures and the parameter estimation methods are investigated. Testing-effort expenditures are described by exponential and Rayleigh curves, and least-squares and maximum-likelihood estimators are used for the reliability growth parameters. The data analyses are based on actual software failure data. The software reliability growth models with testing-effort capture the relationship between software reliability growth and the effect of testing-effort, and thus enable software reliability to be evaluated more realistically.
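The effort curves named here are commonly written as follows (standard forms, stated as an assumption about the exact parameterization), with the NHPP mean value function driven by the cumulative effort W(t):

\[
m(t) = a\left(1 - e^{-r W(t)}\right), \qquad
W_{\text{exp}}(t) = \alpha\left(1 - e^{-\beta t}\right), \qquad
W_{\text{Rayleigh}}(t) = \alpha\left(1 - e^{-\beta t^{2}/2}\right),
\]

where a is the expected number of initial faults, r is the detection rate per unit effort, and α is the total testing-effort eventually expended.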

9.
A large number of software reliability growth models have been proposed to analyse the reliability of a software application based on the failure data collected during the testing phase. To ensure analytical tractability, most of these models rely on the simplifying assumptions of instantaneous and perfect debugging. As a result, the estimates of the residual number of faults, failure rate, reliability, and optimal software release time obtained from these models tend to be optimistic. To obtain realistic estimates, the assumptions of instantaneous and perfect debugging should be relaxed. In this paper we discuss the various policies according to which debugging may be conducted. We then describe a rate-based simulation framework that incorporates explicit debugging activities, conducted according to any of these debugging policies, into software reliability growth models. The simulation framework can also consider the possibility of imperfect debugging in conjunction with any of the debugging policies. Further, we present a technique to compute the failure rate and the reliability of the software, taking explicit debugging into consideration. An economic cost model to determine the optimal software release time in the presence of debugging activities is also described. We illustrate the potential of the simulation framework using two case studies.
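A minimal rate-based simulation sketch in the spirit of such a framework (the event logic, distributions, and parameter values are assumptions for illustration, not the paper's framework): faults are detected at a rate proportional to the number still undetected, each detected fault spends a random debugging time, and a fix succeeds only with probability p:

```python
import heapq
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (assumptions): a faults, detection rate b per
# remaining fault, exponential debugging time, imperfect-fix probability 1 - p.
a, b, mean_debug, p, horizon = 100, 0.02, 5.0, 0.9, 300.0

def simulate():
    t, detected, corrected = 0.0, 0, 0
    corrections = []                       # min-heap of pending correction-completion times
    while True:
        rate = b * (a - detected)
        t_next_detect = t + rng.exponential(1.0 / rate) if rate > 0 else np.inf
        t_next_fix = corrections[0] if corrections else np.inf
        t = min(t_next_detect, t_next_fix)
        if t > horizon:
            return detected, corrected
        if t == t_next_fix:
            heapq.heappop(corrections)
            if rng.random() < p:           # fix succeeds
                corrected += 1
            else:                          # imperfect fix: schedule another attempt
                heapq.heappush(corrections, t + rng.exponential(mean_debug))
        else:                              # new fault detected, queue its debugging
            detected += 1
            heapq.heappush(corrections, t + rng.exponential(mean_debug))

runs = [simulate() for _ in range(200)]
print("mean detected:", np.mean([d for d, _ in runs]),
      "mean corrected:", np.mean([c for _, c in runs]))
```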

10.
A generic model of equipment availability under imperfect maintenance
This paper explores the impact of imperfect repair on the availability of repairable equipment. Kijima's first virtual age model is used to describe the imperfect repair process. Because of the complexity of the underlying assumptions of this model, a closed-form equation for availability cannot be derived; therefore, simulation modeling and analysis are used to evaluate equipment availability. Based on initial availability plots, a generic availability function is proposed. A 2^3 factorial experiment is performed to evaluate the accuracy of this model. The maximum absolute error between the simulation output and the corresponding values of the availability function is 3.82%, indicating that the proposed function provides a reasonable approximation of equipment availability and simplifies meaningful analysis of the unit. A method is then defined for determining optimum equipment replacement intervals based on average cost. Next, meta-models are developed to convert equipment reliability and maintainability parameters into the coefficients of the availability model. The initial experiment is expanded using a circumscribed central composite experimental design, and the accuracy of the meta-models is evaluated for the 15 design experiments and 50 random experiments within the design space. For the 50 new experiments, the replacement policy obtained from analysis of the meta-model is compared to the policy obtained directly from the simulation output; the average increase in cost resulting from the sub-optimal replacement policy is only 0.10%. We therefore conclude that the meta-models are robust and provide good estimates of the parameters of the proposed availability function, eliminating the need to run the simulation to obtain those parameters.
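A compact simulation sketch of Kijima's first virtual age model used to estimate availability; the Weibull failure law, the fixed repair time, and all parameter values are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative parameters (assumptions): Weibull time to first failure,
# Kijima type-I imperfect repair with restoration factor q, fixed repair time.
beta_shape, eta_scale = 2.0, 100.0     # Weibull shape / scale of a new unit
q, repair_time, horizon = 0.3, 8.0, 2000.0

def weibull_residual_life(v):
    """Sample the next operating interval X given virtual age v, by inverting
    the conditional Weibull survival function S(x | v)."""
    u = rng.random()
    return eta_scale * ((v / eta_scale) ** beta_shape - np.log(u)) ** (1.0 / beta_shape) - v

def simulate_availability():
    t, v, uptime = 0.0, 0.0, 0.0
    while t < horizon:
        x = weibull_residual_life(v)      # operating interval until the next failure
        uptime += min(x, horizon - t)
        t += x + repair_time
        v += q * x                        # Kijima I: V_n = V_{n-1} + q * X_n
    return uptime / horizon

print("estimated availability:",
      round(np.mean([simulate_availability() for _ in range(500)]), 4))
```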

11.
Software reliability modeling and estimation play a critical role in software development, particularly during the software testing stage. Although there are many research papers on this subject, few of them address the realistic time delay between the fault detection and fault correction processes. This paper investigates an approach to incorporate the time dependency between the fault detection and fault correction processes, focusing on the parameter estimation of the combined model. Maximum likelihood estimates of the combined model are derived from an explicit likelihood formula under various time-delay assumptions. Various characteristics of the combined model, such as its predictive capability, are also analyzed and compared with the traditional least-squares estimation method. Furthermore, we study a direct, useful application of the proposed model and estimation method to the classical optimal release time problem faced by software decision makers. The results illustrate the effect of the time delay on the optimal release policy and the overall software development cost.
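One commonly used instance of such a combined detection-correction model (an assumed special case, not necessarily the forms used in the paper) pairs exponential detection with an exponentially distributed correction delay of mean 1/μ:

\[
m_d(t) = a\left(1 - e^{-bt}\right), \qquad
m_c(t) = \int_0^t a b\, e^{-bs}\left(1 - e^{-\mu(t-s)}\right)\, ds
       = a\left(1 - e^{-bt}\right) - \frac{ab}{\mu - b}\left(e^{-bt} - e^{-\mu t}\right), \quad \mu \neq b,
\]

where m_d(t) and m_c(t) are the expected numbers of detected and corrected faults by time t; the correction curve lags the detection curve by roughly the mean debugging delay.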

12.
The nonhomogeneous error detection rate model has been used extensively in software reliability modelling. An important management responsibility is to decide when to release the software so as to achieve maximum cost effectiveness. It is well known that the effort required to correct an error increases considerably from the initial testing phase to the final testing phase, and again in the operational phase. In this paper, a method is presented to systematically determine this optimum release instant. The fact that some faults can be regenerated during the correction process, which has been ignored in previous literature, is also considered in the modelling. In addition, partitioning the testing phase into initial and final phases is considered desirable because the effort per error correction differs significantly between these two phases. An example illustrates the entire procedure for various values of the total number of errors and shows the trends of cost and release time.

13.
Software reliability measurement during the testing phase is essential for examining the degree of quality or reliability of a developed software system. A software reliability growth model incorporating the amount of test effort expended during the software testing phase is developed. The time-dependent behavior of test-effort expenditure is described by a Weibull curve. Assuming that the error detection rate with respect to the amount of test effort spent during the testing phase is proportional to the current error content, the model is formulated by a nonhomogeneous Poisson process. Using the model, a method of data analysis for software reliability measurement is developed. The model is applied to predicting the additional test-effort expenditure needed to achieve an objective number of errors detected by software testing, and to determining the optimum time to stop software testing for release.

14.
In some applications, the failure rate of a system depends not only on time, but also on the status of the system, such as vibration level, efficiency, and the number of random shocks it has experienced, all of which cause degradation. In this paper, we develop a generalized condition-based maintenance model subject to multiple competing failure processes, including two degradation processes and random shocks. An average long-run maintenance cost rate function is derived based on the expressions for the degradation paths and cumulative shock damage, which are measurable. A geometric sequence is employed to develop the inter-inspection intervals. Upon inspection, one must decide whether to perform maintenance, either preventive or corrective, or to do nothing. The preventive maintenance thresholds for the degradation processes and the inspection sequence are the decision variables of the proposed model. We also present an algorithm based on the Nelder-Mead downhill simplex method to calculate the optimum policy that minimizes the average long-run maintenance cost rate. Numerical examples are given to illustrate the results using the optimization algorithm.
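A heavily simplified sketch of this kind of optimization, using periodic rather than geometric inspection times and a single degradation path plus compound-Poisson shocks; all model forms, costs, and parameter values below are assumptions for illustration, with only the use of the Nelder-Mead method taken from the abstract:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative costs and thresholds (assumed): failure threshold Lf, costs of
# inspection, preventive replacement, and corrective replacement.
Lf, c_insp, c_pm, c_cm = 100.0, 1.0, 50.0, 200.0
shock_rate, shock_mean, insp_interval = 0.1, 5.0, 10.0

def cycle(Lp, rng):
    """Simulate one renewal cycle; return (cycle cost, cycle length)."""
    slope = rng.gamma(2.0, 1.0)            # random degradation rate per unit time
    t, shocks, cost = 0.0, 0.0, 0.0
    while True:
        t += insp_interval
        cost += c_insp
        shocks += rng.exponential(shock_mean, rng.poisson(shock_rate * insp_interval)).sum()
        level = slope * t + shocks         # combined degradation + shock damage
        if level >= Lf:
            return cost + c_cm, t          # failure found at inspection
        if level >= Lp:
            return cost + c_pm, t          # preventive replacement threshold crossed

def avg_cost_rate(x):
    """Renewal-reward estimate E[cost] / E[length]; a fixed seed gives common
    random numbers across candidate thresholds."""
    Lp = float(np.clip(x[0], 1.0, Lf - 1.0))
    rng = np.random.default_rng(42)
    costs, lengths = zip(*(cycle(Lp, rng) for _ in range(500)))
    return np.mean(costs) / np.mean(lengths)

res = minimize(avg_cost_rate, x0=[0.7 * Lf], method="Nelder-Mead")
print("near-optimal PM threshold:", round(float(res.x[0]), 1),
      "estimated cost rate:", round(res.fun, 3))
```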

15.
Over the past 30 years, many software reliability growth models (SRGM) have been proposed. Often, these mathematical models assume that detected faults are immediately corrected. This assumption may not be realistic in practice, because the time to remove a detected fault depends on the complexity of the fault, the skill and experience of the personnel, the size of the debugging team, the techniques being used, and so on. Practical experience during software testing shows that mutually independent faults can be directly detected and removed, but mutually dependent faults can be removed only if the leading faults on which they depend have been removed. That is, dependent faults may not be immediately removable, and the fault removal process lags behind the fault detection process. In this paper, we first review fault detection and correction processes in software reliability modeling. We then illustrate, with several examples, the fact that detected faults cannot always be immediately corrected. We also discuss software fault dependency in detail, and study how to incorporate both fault dependency and debugging time lag into software reliability modeling. The proposed models are fairly general and cover a variety of known SRGM as special cases under different conditions. Numerical examples are presented, and the results show that the proposed framework incorporating both fault dependency and debugging time lag has better prediction capability. In addition, an optimal software release policy for the proposed models, based on a cost-reliability criterion, is proposed. The main purpose is to minimize the cost of software development when a desired reliability objective is given.

16.
A general software reliability model based on the nonhomogeneous Poisson process (NHPP) is used to derive a model that integrates imperfect debugging with the learning phenomenon. Learning occurs if testing appears to improve dynamically in efficiency as one progresses through the testing phase, and it usually manifests itself as a changing fault-detection rate. Published models and empirical data suggest that efficiency growth due to learning can follow many growth curves, from linear to that described by the logistic function. On the other hand, some recent work indicates that in a real, resource-constrained industrial environment, very little actual learning might occur, because the nonoperational profiles used to generate test and business models can prevent it. When that happens, the testing efficiency can still change when an explicit change in testing strategy occurs, or as a result of the structural profile of the code under test and the test-case ordering.
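A common way to represent such learning (stated here as a standard special case, not the paper's generalized model) is the inflection S-shaped form, in which the per-fault detection rate grows logistically:

\[
b(t) = \frac{b}{1 + c\, e^{-bt}}, \qquad
m(t) = \frac{a\left(1 - e^{-bt}\right)}{1 + c\, e^{-bt}},
\]

where c > 0 is the inflection (learning) factor; letting c approach 0 recovers the exponential Goel-Okumoto model with constant detection rate b, i.e. the no-learning case.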

17.
This paper proposes a new scheme for constructing software reliability growth models (SRGM) based on a nonhomogeneous Poisson process (NHPP). The main focus is to provide an efficient parametric decomposition method for software reliability modeling that considers both testing effort and fault detection rates (FDR). In general, the software fault detection/removal mechanisms depend on previously detected/removed faults and on how testing effort is used. From practical field studies, it is likely that we can estimate the testing-effort consumption pattern and predict the trend of the FDR. A set of time-variable, testing-effort-based FDR models is developed that has the inherent flexibility to capture a wide range of possible fault detection trends: increasing, decreasing, and constant. The scheme has a flexible structure and can model a wide spectrum of software development environments with various testing efforts. The paper describes how the FDR can be obtained from historical records of previous releases or other similar software projects, and incorporates the related testing activities into this new modeling approach. The applicability of the model and the related parametric decomposition methods is demonstrated through several real data sets from various software projects. The evaluation results show that the proposed framework incorporating testing effort and FDR into SRGM has fairly accurate prediction capability and depicts real-life situations more faithfully. The technique can be applied to a wide range of software systems.

18.
In this paper, we discuss a software reliability growth model with a learning factor for imperfect debugging based on a non-homogeneous Poisson process (NHPP). Parameters used in the model are estimated. An optimal release policy is obtained for a software system based on the total mean profit and reliability criteria. A random software life-cycle is also incorporated in the discussion. Numerical results are presented in the final section.

19.
Optimal software release policies, which decide when to stop testing a software system and transfer it to the user, are discussed for a software reliability growth model with a nonhomogeneous error detection rate. Two evaluation criteria, total expected software cost and software reliability, are introduced. Optimal software release policies are derived for each criterion separately, and policies that evaluate both criteria simultaneously are also discussed. Numerical examples of these optimal software release policies are presented for illustration.

20.
Software test coverage intuitively describes the extent to which software has been tested, but most existing coverage-based software reliability growth models do not consider fault removal efficiency. This paper introduces both test coverage and fault removal efficiency into the software reliability assessment process and establishes a nonhomogeneous Poisson process (NHPP) software reliability growth model that accounts for both test coverage and fault removal efficiency. Experimental analysis on a set of failure data shows that, for this data set, the proposed model fits better than several other NHPP-type models.

