Similar Literature
Found 20 similar documents (search time: 31 ms)
1.
Test coverage directly reflects the thoroughness of software testing, yet most existing coverage-based software reliability growth models do not account for fault-removal efficiency. This paper incorporates both test coverage and fault-removal efficiency into the software reliability assessment process, building a nonhomogeneous-Poisson-process (NHPP) software reliability growth model that considers the two factors jointly. Experimental analysis on a set of failure data shows that, for this data set, the proposed model fits better than several other NHPP-class models.

2.
A stochastic model (G-O) for the software failure phenomenon based on a nonhomogeneous Poisson process (NHPP) was suggested by Goel and Okumoto (1979). This model has been widely used but some important work remains undone on estimating the parameters. The authors present a necessary and sufficient condition for the likelihood estimates to be finite, positive, and unique. A modification of the G-O model is suggested. The performance measures and parametric inferences of the new model are discussed. The results of the new model are applied to real software failure data and compared with the G-O and Jelinski-Moranda models.
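The Goel-Okumoto mean value function m(t) = a(1 − e^{−bt}) and its failure-time log-likelihood are standard and can be sketched directly (the parameter values used below are illustrative, not estimates from any particular data set):

```python
import math

def go_mean_value(t, a, b):
    """Goel-Okumoto NHPP mean value function m(t) = a * (1 - exp(-b*t)):
    expected cumulative number of failures by time t, where a is the
    expected total fault content and b the per-fault detection rate."""
    return a * (1.0 - math.exp(-b * t))

def go_log_likelihood(times, horizon, a, b):
    """Log-likelihood of failure times 0 < t_1 <= ... <= t_n <= horizon
    under the G-O NHPP: sum_i log(lambda(t_i)) - m(horizon), with
    intensity lambda(t) = a * b * exp(-b*t)."""
    log_intensity = sum(math.log(a * b) - b * t for t in times)
    return log_intensity - go_mean_value(horizon, a, b)
```

Maximizing this log-likelihood jointly in (a, b) yields the estimates whose finiteness, positivity, and uniqueness condition the abstract refers to.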

3.
We summarize the reliability growth models for hardware and software systems described by a stochastic process, where the underlying stochastic process is assumed to be a nonhomogeneous Poisson process (NHPP). The background of reliability growth modelling based on an NHPP is surveyed. The Duane model, which was first postulated as a reliability growth model and is commonly used, is first explained. Secondly, the Weibull growth and modified Weibull growth models for hardware systems and the exponential type growth and gamma type growth models for error detection for software systems are discussed. The parameter estimates can be obtained by maximum likelihood estimation. Finally, the goodness-of-fit tests based on chi-square, Cramér-von Mises and Kolmogorov-Smirnov statistics are presented for the reliability growth models based on an NHPP.

4.
In this paper we give a general Markov process formulation for a software reliability model and present expressions for software performance measures. We discuss a general model and derive the maximum likelihood estimates for the required parameters of this model. The generality of this model is demonstrated by showing that the Jelinski-Moranda model and the Non-Homogeneous Poisson Process (NHPP) model are both very special cases of our model. In this process we also correct some errors in a previous paper of the NHPP model.

5.
In this paper, we discuss a software reliability growth model with a learning factor for imperfect debugging based on a non-homogeneous Poisson process (NHPP). Parameters used in the model are estimated. An optimal release policy is obtained for a software system based on the total mean profit and reliability criteria. A random software life-cycle is also incorporated in the discussion. Numerical results are presented in the final section.

6.
This paper describes the use of two NHPP-type growth models to predict the software reliability of a measurement-and-control system, gives their analytic expressions, and clarifies the relationship between the two models. For the test-and-debug process of the software of a launch measurement-and-control system, the program's fault capacity was first roughly estimated; based on the data obtained, the two NHPP models were used to compute parameter estimates, predict the achieved software reliability level, and estimate the time required for the remaining software tests.

7.
Little work has been done on extending existing models with imperfect debugging to the more realistic situation where new faults are generated from unsuccessful attempts at removing faults completely. This paper presents a software-reliability growth model which incorporates the possibility of introducing new faults into a software system due to the imperfect debugging of the original faults in the system. The original faults manifest themselves as primary failures and are assumed to be distributed as a nonhomogeneous Poisson process (NHPP). Imperfect debugging of each primary failure induces a secondary failure which is assumed to occur in a delayed sense from the occurrence time of the primary failure. The mean total number of failures, comprising the primary and secondary failures, is obtained. The authors also develop a cost model and consider some optimal release-policies based on the model. Parameters are estimated using maximum likelihood and a numerical example is presented.

8.
An improved software reliability release policy is presented, based on the nonhomogeneous Poisson process (NHPP) and incorporating the effect of testing effort. Testing effort functions are modelled by exponential, Rayleigh and Weibull curves. The optimal software release time is determined by minimizing the total expected software cost under the conditions of satisfying a software reliability objective. Numerical examples have been included to illustrate the software release policy.
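The trade-off behind such release policies can be illustrated with a commonly used cost structure over a G-O mean value function (the coefficients c1, c2, c3 and the grid-search helper are illustrative assumptions, not the paper's exact formulation):

```python
import math

def expected_cost(t, a, b, c1, c2, c3):
    """Total expected cost if released at time t under a G-O model
    m(t) = a*(1 - exp(-b*t)): cost c1 per fault fixed during testing,
    c2 (> c1) per residual fault fixed in the field, c3 per unit of
    testing time."""
    m_t = a * (1.0 - math.exp(-b * t))
    return c1 * m_t + c2 * (a - m_t) + c3 * t

def optimal_release_time(a, b, c1, c2, c3, horizon=1000.0, steps=100000):
    """Grid search for the release time minimizing expected cost."""
    best_t, best_c = 0.0, expected_cost(0.0, a, b, c1, c2, c3)
    for i in range(1, steps + 1):
        t = horizon * i / steps
        c = expected_cost(t, a, b, c1, c2, c3)
        if c < best_c:
            best_t, best_c = t, c
    return best_t
```

When an interior optimum exists it solves (c2 − c1)·a·b·e^{−bt} = c3, i.e. t* = (1/b)·ln((c2 − c1)·a·b / c3), which the grid search recovers numerically.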

9.
We develop a moving average non-homogeneous Poisson process (MA NHPP) reliability model which includes the benefits of both time domain, and structure based approaches. This method overcomes the deficiency of existing NHPP techniques that fall short of addressing repair, and internal system structures simultaneously. Our solution adopts a MA approach to cover both methods, and is expected to improve reliability prediction. This paradigm allows software components to vary in nature, and can account for system structures due to its ability to integrate individual component reliabilities on an execution path. Component-level modeling supports sensitivity analysis to guide future upgrades, and updates. Moreover, the integration capability is a benefit for incremental software development, meaning only the affected portion needs to be re-evaluated instead of the entire package, facilitating software evolution to a higher extent than with other methods. Several experiments on different system scenarios and circumstances are discussed, indicating the usefulness of our approach.

10.
This paper investigates an SRGM (software reliability growth model) based on the NHPP (nonhomogeneous Poisson process) which incorporates a logistic testing-effort function. SRGMs proposed in the literature consider the amount of testing-effort spent on software testing which can be depicted as an exponential curve, a Rayleigh curve, or a Weibull curve. However, it might not be appropriate to represent the consumption curve for testing-effort by one of those curves in some software development environments. Therefore, this paper shows that a logistic testing-effort function can be expressed as a software-development/test-effort curve and that it gives a good predictive capability based on real failure-data. Parameters are estimated, and experiments performed on actual test/debug data sets. Results from applications to a real data set are analyzed and compared with other existing models to show that the proposed model predicts better. In addition, an optimal software release policy for this model, based on cost-reliability criteria, is proposed.
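The logistic testing-effort function discussed here is usually written W(t) = N / (1 + A·e^{−αt}); a minimal sketch under that convention (the parameter names are the customary ones, not necessarily the paper's):

```python
import math

def logistic_testing_effort(t, n_total, a_shape, alpha):
    """Cumulative testing effort W(t) = N / (1 + A * exp(-alpha * t)):
    an S-shaped consumption curve rising from N/(1+A) at t = 0 toward
    the total available effort N as t grows."""
    return n_total / (1.0 + a_shape * math.exp(-alpha * t))

def current_testing_effort(t, n_total, a_shape, alpha):
    """Instantaneous effort w(t) = dW/dt, which peaks at t = ln(A)/alpha."""
    e = a_shape * math.exp(-alpha * t)
    return n_total * alpha * e / (1.0 + e) ** 2
```

Unlike the exponential and Rayleigh curves, w(t) rises to a single interior peak before tapering off, which is what gives W(t) its S shape.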

11.
12.
An S-shaped software reliability growth model (SRGM) based on a non-homogeneous Poisson process (NHPP) with two types of errors has been proposed. The errors have been classified depending upon their severity. We have estimated the model parameters and obtained the optimum release policies which minimize the cost subject to achieving a given level of reliability. Numerical results illustrating the applicability of the proposed model are also presented.

13.
This paper presents a NHPP-based SRGM (software reliability growth model) for NVP (N-version programming) systems (NVP-SRGM) based on the NHPP (nonhomogeneous Poisson process). Although many papers have been devoted to modeling NVP-system reliability, most of them consider only the stable reliability, i.e., they do not consider the reliability growth in NVP systems due to continuous removal of faults from software versions. The model in this paper is the first reliability-growth model for NVP systems which considers the error-introduction rate and the error-removal efficiency. During testing and debugging, when a software fault is found, a debugging effort is devoted to remove this fault. Due to the high complexity of the software, this fault might not be successfully removed, and new faults might be introduced into the software. By applying a generalized NHPP model into the NVP system, a new NVP-SRGM is established, in which the multi-version coincident failures are well modeled. A simplified software control logic for a water-reservoir control system illustrates how to apply this new software reliability model. The s-confidence bounds are provided for system-reliability estimation. This software reliability model can be used to evaluate the reliability and to predict the performance of NVP systems. More application is needed to validate fully the proposed NVP-SRGM for quantifying the reliability of fault-tolerant software systems in a general industrial setting. As the first model of its kind in NVP reliability-growth modeling, the proposed NVP-SRGM can be used to overcome the shortcomings of the independent reliability model. It predicts the system reliability more accurately than the independent model and can be used to help determine when to stop testing, which is a key question in the testing and debugging phase of the NVP system-development life cycle.

14.
Bootstrap methods are presented for constructing s-confidence regions for the s-expected ROCOF (rate of occurrence of failures) of a repairable system. This is based on the work of Cowling, et al. (1996) for the intensity function of a NHPP (nonhomogeneous Poisson process). The method is applied to the operating times of unscheduled maintenance actions for a diesel engine of the USS Grampus given by Lee (1980) and also analyzed by Crowder, et al. (1991).
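The general recipe — fit the process, simulate from the fit, refit each replicate, and take empirical quantiles — can be sketched for a power-law NHPP (a generic percentile parametric bootstrap; the function names are assumptions, and this is not the specific construction of Cowling et al.):

```python
import math
import random

def powerlaw_mle(times, horizon):
    """MLE for a power-law NHPP with ROCOF lam(t) = (b/th)*(t/th)**(b-1),
    from failure times observed on (0, horizon]."""
    n = len(times)
    b = n / sum(math.log(horizon / t) for t in times)
    th = horizon / n ** (1.0 / b)
    return b, th

def rocof(t, b, th):
    """Rate of occurrence of failures at time t."""
    return (b / th) * (t / th) ** (b - 1)

def _poisson(rng, lam):
    """Poisson sampler (Knuth's method; fine for moderate lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def bootstrap_rocof_ci(times, horizon, t_eval, n_boot=1000, alpha=0.10, seed=1):
    """Percentile-bootstrap confidence interval for the ROCOF at t_eval:
    simulate the fitted process, refit each replicate, take quantiles."""
    rng = random.Random(seed)
    b_hat, th_hat = powerlaw_mle(times, horizon)
    mean_n = (horizon / th_hat) ** b_hat  # expected failures by the horizon
    reps = []
    while len(reps) < n_boot:
        n_star = _poisson(rng, mean_n)
        if n_star < 2:
            continue  # too few events to refit
        # Given the count, power-law event times are iid with CDF (t/T)**b
        sim = [horizon * (1.0 - rng.random()) ** (1.0 / b_hat)
               for _ in range(n_star)]
        b_s, th_s = powerlaw_mle(sim, horizon)
        reps.append(rocof(t_eval, b_s, th_s))
    reps.sort()
    return reps[int(alpha / 2 * n_boot)], reps[int((1 - alpha / 2) * n_boot) - 1]
```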

15.
This paper proposes a new scheme for constructing software reliability growth models (SRGM) based on a nonhomogeneous Poisson process (NHPP). The main focus is to provide an efficient parametric decomposition method for software reliability modeling, which considers both testing efforts and fault detection rates (FDR). In general, the software fault detection/removal mechanisms depend on previously detected/removed faults and on how testing efforts are used. From practical field studies, it is likely that we can estimate the testing efforts consumption pattern and predict the trends of FDR. A set of time-variable, testing-effort-based FDR models were developed that have the inherent flexibility of capturing a wide range of possible fault detection trends: increasing, decreasing, and constant. This scheme has a flexible structure and can model a wide spectrum of software development environments, considering various testing efforts. The paper describes the FDR, which can be obtained from historical records of previous releases or other similar software projects, and incorporates the related testing activities into this new modeling approach. The applicability of our model and the related parametric decomposition methods are demonstrated through several real data sets from various software projects. The evaluation results show that the proposed framework to incorporate testing efforts and FDR for SRGM has a fairly accurate prediction capability and it depicts the real-life situation more faithfully. This technique can be applied to a wide range of software systems.

16.
Monte Carlo simulation is used to assess the statistical properties of some Bayes procedures in situations where only a few data on a system governed by a NHPP (nonhomogeneous Poisson process) can be collected and where there is little or imprecise prior information available. In particular, in the case of failure truncated data, two Bayes procedures are analyzed. The first uses a uniform prior PDF (probability distribution function) for the power law and a noninformative prior PDF for α, while the other uses a uniform PDF for the power law while assuming an informative PDF for the scale parameter obtained by using a gamma distribution for the prior knowledge of the mean number of failures in a given time interval. For both cases, point and interval estimation of the power law and point estimation of the scale parameter are discussed. Comparisons are given with the corresponding point and interval maximum-likelihood estimates for sample sizes of 5 and 10. The Bayes procedures are computationally much more onerous than the corresponding maximum-likelihood ones, since they in general require a numerical integration. In the case of small sample sizes, however, their use may be justified by the exceptionally favorable statistical properties shown when compared with the classical ones. In particular, their robustness with respect to a wrong assumption on the prior β mean is interesting.

17.
The author studies the Laplace trend test when it is used to detect software reliability growth, and proves its optimality in the frame of the most famous software reliability models. Its intuitive importance is explained, and its statistical properties are established for the five models: Goel-Okumoto, Crow, Musa-Okumoto, Littlewood-Verral, and Moranda. The Laplace test has excellent optimality properties for several models, particularly for nonhomogeneous Poisson processes (NHPPs). It is good in the Moranda model, which is not an NHPP; this justifies entirely the use of this test as a trend test. Nevertheless, the Laplace test is not completely satisfactory because neither its exact statistical-significance level, nor its power are calculable, and nothing can be said about its properties for the Littlewood-Verral method. Consequently, the author suggests that it is always better to check if it has good properties in the model, and to search for other tests whose statistical-significance level and power are calculable.
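The Laplace statistic itself is standard and easy to state; a minimal sketch:

```python
import math

def laplace_factor(times, horizon):
    """Laplace trend statistic for failure times on (0, horizon]:
    u = (mean(t_i) - T/2) / (T * sqrt(1/(12*n))).
    Approximately N(0, 1) under a homogeneous Poisson process; a
    significantly negative u suggests reliability growth (failures
    concentrating early), a positive u suggests reliability decay."""
    n = len(times)
    mean_t = sum(times) / n
    return (mean_t - horizon / 2.0) / (horizon * math.sqrt(1.0 / (12.0 * n)))
```

In practice, values below about −2 are commonly read as evidence of reliability growth at roughly the 5% level, subject to the caveats on exact significance raised in the abstract above.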

18.
By analyzing and comparing the typical current types of software testing, a novel QC 3-D software testing model is constructed. The model integrates three dimensions: the software testing process, software quality cost, and testing level, and defines a quantitative formula for software quality cost. Using the notion of quality-cost equilibrium, the control cost and failure cost that make up software quality cost are balanced against each other, and the testing phase and level are adjusted accordingly, aiming at the optimum of best software quality at minimum cost.

19.
A general software reliability model based on the nonhomogeneous Poisson process (NHPP) is used to derive a model that integrates imperfect debugging with the learning phenomenon. Learning occurs if testing appears to improve dynamically in efficiency as one progresses through a testing phase. Learning usually manifests itself as a changing fault-detection rate. Published models and empirical data suggest that efficiency growth due to learning can follow many growth-curves, from linear to that described by the logistic function. On the other hand, some recent work indicates that in a real industrial resource-constrained environment, very little actual learning might occur because nonoperational profiles used to generate test and business models can prevent the learning. When that happens, the testing efficiency can still change when an explicit change in testing strategy occurs, or it can change as a result of the structural profile of the code under test and test-case ordering.

20.
In radar software development, reliability analysis techniques should be applied during development to identify software failure modes and derive fault-prevention measures. This paper studies how to use reliability analysis techniques in radar software requirements development and design, and, taking typical software in a radar system as an example, proposes technical approaches for carrying out functional failure mode, effects and criticality analysis (FMECA) and software FMECA within the software engineering process.
