Similar Literature
20 similar documents found.
1.
A software reliability growth model is one of the fundamental techniques for assessing software reliability quantitatively. Such a model is required to perform well in terms of goodness-of-fit, predictability, and so forth. In this paper, we propose discretized software reliability growth models; in particular, discretized nonhomogeneous Poisson process models are investigated for accurate software reliability assessment. We show that the discrete nonhomogeneous Poisson process models perform better than the discretized deterministic software reliability growth models proposed so far.
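As a rough, self-contained illustration of the kind of fit such models require (not the authors' specific discretized models), the sketch below fits a discrete Goel-Okumoto-style mean value function m(i) = a(1 - (1 - b)^i) to invented weekly cumulative failure counts.

```python
# Hypothetical sketch: fitting a discrete Goel-Okumoto-type mean value
# function m(i) = a * (1 - (1 - b)**i) to grouped (per-interval) failure data.
# The data below are made up for illustration only.
import numpy as np
from scipy.optimize import curve_fit

weeks = np.arange(1, 11)                          # testing intervals
cum_failures = np.array([5, 12, 18, 22, 26, 28, 30, 31, 32, 32])

def discrete_mvf(i, a, b):
    """Expected cumulative number of failures after i intervals."""
    return a * (1.0 - (1.0 - b) ** i)

(a_hat, b_hat), _ = curve_fit(discrete_mvf, weeks, cum_failures, p0=[40.0, 0.2])
print(f"estimated total faults a = {a_hat:.1f}, per-interval detection rate b = {b_hat:.3f}")
print("predicted cumulative failures at week 15:", round(discrete_mvf(15, a_hat, b_hat), 1))
```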

2.
3.
Software reliability testing is concerned with the quantitative relationship between software testing and software reliability. Our previous work develops a mathematically rigorous modeling framework for software reliability testing. However, that framework is confined to the case of perfect debugging, where detected defects are removed without introducing new defects. In this paper the modeling framework is extended to the case of imperfect debugging, and two models are proposed. In the first model it is assumed that debugging is imperfect and may cause the number of remaining defects to decrease by one, remain unchanged, or increase by one. In the second model it is assumed that when the number of remaining defects reaches its upper bound, the probability that debugging increases the number of remaining defects by one is zero. The expected behaviors of the cumulative number of observed failures and the number of remaining defects in the first model show that the software testing process may induce a linear or nonlinear dynamic system, depending on the relationship between the probability that debugging introduces a new defect and the probability that debugging removes a detected defect. The second-order behaviors of the first model also show that, in the case of imperfect debugging, although there may be an unbiased estimator of the initial number of defects remaining in the software under test, the cumulative number of observed failures and the current number of remaining defects are not sufficient for precisely estimating that initial number, because the variance of the unbiased estimator approaches a non-zero constant as the software testing process proceeds. This may be treated as an intrinsic principle of uncertainty for software testing. The expected behaviors of the cumulative number of observed failures and the number of remaining defects in the second model show that the software testing process may induce a nonlinear dynamic system. However, theoretical analysis and simulation results show that, if defects are more often removed from than introduced into the software under test, the expected behaviors of the two models tend to coincide as the upper bound on the number of remaining defects approaches infinity.
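The first model described above is essentially a random walk on the number of remaining defects. The toy simulation below, with made-up probabilities and an optional upper bound (mirroring the second model), illustrates that dynamic; it is not the paper's formal framework.

```python
# Illustrative simulation (not the paper's exact formulation) of imperfect
# debugging: after each detected failure, the number of remaining defects
# drops by one with probability p, stays the same with probability r, or
# rises by one with probability q = 1 - p - r (unless an upper bound is hit,
# in which case the count simply stays the same).
import random

def simulate_remaining_defects(n0=30, p=0.6, r=0.3, steps=200, upper_bound=None, seed=1):
    random.seed(seed)
    n = n0
    history = [n]
    for _ in range(steps):
        if n == 0:
            break  # no defects left to detect
        u = random.random()
        if u < p:
            n -= 1                      # defect removed successfully
        elif u < p + r:
            pass                        # debugging leaves defect count unchanged
        elif upper_bound is None or n < upper_bound:
            n += 1                      # imperfect fix introduces a new defect
        history.append(n)
    return history

trace = simulate_remaining_defects()
print("remaining defects over time:", trace[:20], "...")
print("final remaining defects:", trace[-1])
```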

4.
Quantitative evaluation of software reliability is one of the key problems in software reliability engineering. Using the fault tree method to analyze software qualitatively and quantitatively, this paper proposes, for two classes of situations, a way to divide the primary and secondary factors affecting software reliability and to compute their fuzzy weights. On this basis, a multi-level fuzzy evaluation model is established, augmentation and aggregation algorithms are proposed, and a formula for software reliability is given. A test case study on the software of a certain type of aviation equipment is analyzed; the experimental results demonstrate the reasonableness of the evaluation structure and the effectiveness of the evaluation algorithm, and the method is suitable for engineering practice in software quality and development process control.
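A generic single-level fuzzy comprehensive evaluation, with invented factor names, weights, and membership degrees, gives a flavour of the weighted aggregation step; the paper's multi-level model and its augmentation/aggregation algorithms are not reproduced here.

```python
# Generic single-level fuzzy comprehensive evaluation sketch (not the paper's
# multi-level model): weights over reliability factors are combined with a
# membership matrix over rating grades to score the software.
import numpy as np

factors = ["design defects", "coding defects", "documentation", "testing coverage"]
grades = ["excellent", "good", "fair", "poor"]

# Hypothetical fuzzy weights of the factors (sum to 1).
W = np.array([0.35, 0.30, 0.15, 0.20])

# Hypothetical membership matrix R[i, j]: degree to which factor i
# belongs to grade j (each row sums to 1).
R = np.array([
    [0.4, 0.4, 0.1, 0.1],
    [0.3, 0.4, 0.2, 0.1],
    [0.2, 0.5, 0.2, 0.1],
    [0.5, 0.3, 0.1, 0.1],
])

B = W @ R                                           # fuzzy evaluation vector over grades
grade_scores = np.array([0.95, 0.85, 0.70, 0.50])   # assumed numeric value of each grade
reliability_score = float(B @ grade_scores)

print(dict(zip(grades, np.round(B, 3))))
print("aggregate reliability score:", round(reliability_score, 3))
```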

5.
We describe the use of a latent Markov process governing the parameters of a nonhomogeneous Poisson process (NHPP) model for characterizing the software development defect discovery process. Use of a Markov switching process allows us to characterize non-smooth variations in the rate at which defects are found, better reflecting the industrial software development environment in practice. Additionally, we propose a multivariate model for characterizing changes in the distribution of defect types found over time, conditional on the total number of defects; a latent Markov chain governs the evolution of the probabilities of the different types. Bayesian methods via Markov chain Monte Carlo facilitate inference. We illustrate the efficacy of the methods using simulated data, then apply them to model reliability growth in a large operating system software component, based on defects discovered during the system testing phase of development.
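A toy forward simulation of a Markov-switching Poisson discovery process, with invented rates and a made-up transition matrix and without any MCMC inference, illustrates only the generative structure the paper works with.

```python
# Toy forward simulation of a Markov-switching Poisson defect-discovery
# process (illustration only; the paper fits such models with MCMC).
import numpy as np

rng = np.random.default_rng(0)
rates = {0: 2.0, 1: 8.0}                 # defects/day in "slow" and "fast" regimes (assumed)
P = np.array([[0.9, 0.1],                # latent regime transition matrix (assumed)
              [0.2, 0.8]])

state, counts, states = 0, [], []
for day in range(60):
    counts.append(rng.poisson(rates[state]))   # defects found on this day
    states.append(state)
    state = rng.choice(2, p=P[state])          # move to the next latent regime

print("total defects found:", sum(counts))
print("days spent in the fast regime:", sum(states))
```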

6.
There is no universally applicable software reliability growth model which can be trusted to give accurate predictions of reliability in all circumstances. A technique of analyzing predictive accuracy called the u-plot allows a user to estimate the relationship between the predicted reliability and the true reliability. It is shown how this can be used to improve reliability predictions in a very general way by a process of recalibration. Simulation results show that the technique gives improved reliability predictions in a large proportion of cases. However, a user does not need to trust the efficacy of recalibration, since the new reliability estimates produced by the technique are truly predictive and their accuracy in a particular application can be judged using the earlier methods. The generality of this approach suggests its use whenever a software reliability model is used. Indeed, although this work arose from the need to address the poor performance of software reliability models, it is likely to have applicability in other areas such as reliability growth modeling for hardware.
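The sketch below illustrates only the u-plot idea, with hypothetical one-step-ahead exponential predictions: each predicted distribution is evaluated at the observed inter-failure time, and the departure of the resulting u-values from uniformity is summarized by a Kolmogorov-type distance. The recalibration step itself is not shown.

```python
# Sketch of the u-plot idea (illustration, not the full recalibration
# procedure): each one-step-ahead prediction F_i for the next inter-failure
# time is evaluated at the observed time t_i; if predictions are good, the
# resulting u_i values should look uniform on [0, 1].
import numpy as np

# Hypothetical observed inter-failure times and the predicted exponential
# rates the model produced just before each observation.
observed_times = np.array([3.1, 5.0, 2.2, 8.4, 6.1, 9.3, 12.0, 7.5])
predicted_rates = np.array([0.40, 0.30, 0.35, 0.20, 0.22, 0.15, 0.12, 0.14])

u = 1.0 - np.exp(-predicted_rates * observed_times)   # u_i = F_i(t_i)
u_sorted = np.sort(u)
empirical = np.arange(1, len(u) + 1) / len(u)
ks_distance = np.max(np.abs(u_sorted - empirical))    # deviation from uniformity

print("u-values:", np.round(u_sorted, 2))
print("Kolmogorov distance from uniform:", round(ks_distance, 3))
```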

7.
Software managers are routinely confronted with software projects that contain errors or inconsistencies and exceed budget and time limits. By mining software repositories with comprehensible data mining techniques, predictive models can be induced that offer software managers the insights they need to tackle these quality and budgeting problems in an efficient way. This paper deals with the role that the Ant Colony Optimization (ACO)-based classification technique AntMiner+ can play as a comprehensible data mining technique to predict erroneous software modules. In an empirical comparison on three real-world public datasets, the rule-based models produced by AntMiner+ are shown to achieve a predictive accuracy that is competitive with that of the models induced by several other classification techniques, such as C4.5, logistic regression and support vector machines. In addition, we argue that the intuitiveness and comprehensibility of the AntMiner+ models can be considered superior to those of the latter models.
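As a hedged baseline sketch in the spirit of this benchmark (AntMiner+ itself is not reproduced and the datasets here are synthetic), the snippet below compares logistic regression and an SVM on a stand-in module-metrics dataset using scikit-learn.

```python
# Baseline comparison sketch: logistic regression and an SVM trained on a
# synthetic "software module metrics -> defective?" dataset.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for module-level metrics (LOC, complexity, churn, ...),
# with an imbalanced defective class, as is typical in defect data.
X, y = make_classification(n_samples=400, n_features=8, n_informative=5,
                           weights=[0.8, 0.2], random_state=0)

for name, clf in [("logistic regression", LogisticRegression(max_iter=1000)),
                  ("SVM (RBF kernel)", SVC())]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```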

8.
To further improve the fitting and prediction performance of existing nonhomogeneous Poisson process (NHPP) software reliability growth models, imperfect debugging models are first studied in depth from the perspective of the growth trend of the total number of faults. Two general imperfect-debugging framework models are proposed, which consider, respectively, a linear relationship and a differential relationship between the total fault content function and the cumulative detected-fault function, and expressions for the cumulative number of detected faults and the total number of faults in the software are derived. Second, the two proposed general imperfect-debugging models are compared with six existing imperfect-debugging models on six real failure datasets. The validation results show that the proposed general frameworks achieve excellent fitting and prediction performance on most of the failure datasets, demonstrating the effectiveness and practicality of the new models. Based on an in-depth analysis of the performance of the proposed models and the other imperfect-debugging models on these datasets, recommendations are given for selecting imperfect-debugging models in practical applications.
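A rough numerical sketch, with assumed parameter values and a structure chosen by me rather than taken from the paper, shows one such imperfect-debugging formulation: the total fault content grows linearly with the number of detected faults, a(t) = a0 + alpha*m(t), while detection follows dm/dt = b*(a(t) - m(t)).

```python
# Rough numerical sketch (my assumptions, not the paper's exact framework):
# an imperfect-debugging SRGM in which the total fault content grows linearly
# with the number of detected faults, a(t) = a0 + alpha * m(t), and detection
# follows dm/dt = b * (a(t) - m(t)).
import numpy as np
from scipy.integrate import odeint

a0, alpha, b = 100.0, 0.1, 0.05    # assumed initial faults, introduction ratio, detection rate

def dm_dt(m, t):
    return b * ((a0 + alpha * m) - m)

t = np.linspace(0, 100, 101)
m = odeint(dm_dt, 0.0, t).ravel()

print("detected faults at t=50 :", round(float(m[50]), 1))
print("detected faults at t=100:", round(float(m[100]), 1))
print("total fault content at t=100:", round(a0 + alpha * float(m[100]), 1))
```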

9.
This paper presents modeling frameworks for distributing development effort among software components to facilitate cost-effective progress toward a system reliability goal. Emphasis on components means that the frameworks can be used, for example, in cleanroom processes and to set certification criteria. The approach, based on reliability allocation, uses the operational profile to quantify the usage environment and a utilization matrix to link usage with system structure. Two approaches for reliability and cost planning are introduced: Reliability-Constrained Cost-Minimization (RCCM) and Budget-Constrained Reliability-Maximization (BCRM). Efficient solutions are presented corresponding to three general functions for measuring cost-to-attain failure intensity. One of the functions is shown to be a generalization of the basic COCOMO form. Planning within budget, adaptation for other cost functions and validation issues are also discussed. Analysis capabilities are illustrated using a software system consisting of 26 developed modules and one procured module. The example also illustrates how to specify a reliability certification level, and minimum purchase price, for the procured module.
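A minimal allocation sketch, assuming a simple cost-to-attain-failure-intensity function k_i / lambda_i and an invented utilization vector (neither taken from the paper), conveys the Reliability-Constrained Cost-Minimization idea using a generic constrained solver rather than the paper's efficient solutions.

```python
# Hypothetical reliability-allocation sketch (not the paper's RCCM algorithm):
# choose component failure intensities lam_i to minimize total development
# cost, subject to a system failure-intensity goal, where component usage is
# weighted by an assumed utilization vector.
import numpy as np
from scipy.optimize import minimize

util = np.array([0.5, 0.3, 0.2])          # assumed component utilizations
k = np.array([4.0, 2.0, 1.0])             # assumed cost scale per component
goal = 0.01                               # required system failure intensity

cost = lambda lam: float(np.sum(k / lam))              # cheaper to tolerate more failures
constraint = {"type": "ineq",
              "fun": lambda lam: goal - float(util @ lam)}  # system intensity <= goal

res = minimize(cost, x0=np.full(3, goal), bounds=[(1e-6, None)] * 3,
               constraints=[constraint])
print("allocated failure intensities:", np.round(res.x, 4))
print("minimum total cost:", round(res.fun, 1))
```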

10.
Since the early 1970s, tremendous growth has been seen in research on software reliability growth modeling. In general, software reliability growth models (SRGMs) are applicable to the late stages of testing in software development, and they can provide useful information about how to improve the reliability of software products. A number of SRGMs have been proposed in the literature to represent the time-dependent fault identification/removal phenomenon, and new models are still being proposed that could fit a greater number of reliability growth curves. Often it is assumed, when the mathematical models are developed, that detected faults are immediately corrected. This assumption may not be realistic in practice, because the time to remove a detected fault depends on the complexity of the fault, the skill and experience of the personnel, the size of the debugging team, the technique used, and so on. Thus, a detected fault need not be removed immediately, and its removal may lag the fault detection process by a delay effect factor. In this paper, we first review how different software reliability growth models have been developed in which the fault detection process depends not only on the residual fault content but also on the testing time, and show how these models can be reinterpreted as delayed fault detection models by using a delay effect factor. Based on the power function of testing time, we then propose four new SRGMs that assume the presence of two types of faults in the software: leading and dependent faults. Leading faults are those that can be removed as soon as a failure is observed. Dependent faults, however, are masked by leading faults and can only be removed, after a debugging time lag, once the corresponding leading fault has been removed. These models have been tested on real software error data to show their goodness of fit, predictive validity and applicability.
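The classic Yamada delayed S-shaped mean value function, evaluated below with assumed parameters, conveys the delay-effect idea in its simplest form; it is not one of the four new models proposed in the paper.

```python
# The classic Yamada delayed S-shaped mean value function, often used to
# express a lag between fault detection and fault removal; shown here only
# to illustrate the "delay effect" idea, not the paper's four new models.
import numpy as np

def delayed_s_shaped(t, a, b):
    """Expected cumulative number of removed faults at time t."""
    return a * (1.0 - (1.0 + b * t) * np.exp(-b * t))

a, b = 120.0, 0.05                        # assumed total faults and detection rate
for t in (20, 50, 100, 200):
    print(f"t={t:3d}: expected faults removed = {delayed_s_shaped(t, a, b):6.1f}")
```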

11.
Software reliability models require that the operational profile used during testing be consistent with the operational profile in actual use, but this is often hard to achieve, so the reliability predicted at the end of testing can differ considerably from the reliability actually achieved after release. To improve the accuracy of software reliability assessment, the concept of profile difference is proposed, under the assumption that the difference between the testing operational profile and the actual operational profile is the same across versions of the same software. On this premise, a multi-version calibration method is proposed that uses the profile difference observed on previous versions of the software to improve the reliability assessment of the current version.
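A naive two-release calibration with invented failure rates conveys the core assumption (the test-versus-field profile difference carries over between versions); the paper's method is more elaborate than this sketch.

```python
# Naive calibration sketch under my own assumptions (not the paper's method):
# the ratio between field and test failure rates observed on a previous
# release is reused to adjust the current release's test estimate.
prev_test_failure_rate = 0.008    # failures/hour measured during testing of v1 (assumed)
prev_field_failure_rate = 0.012   # failures/hour later observed in the field for v1 (assumed)
profile_adjustment = prev_field_failure_rate / prev_test_failure_rate

curr_test_failure_rate = 0.005    # failures/hour measured during testing of v2 (assumed)
calibrated_estimate = curr_test_failure_rate * profile_adjustment
print(f"calibrated field failure-rate estimate for v2: {calibrated_estimate:.4f} per hour")
```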

12.
杨彬  陈丽容 《计算机工程与设计》2007,28(20):4839-4841,4852
This paper studies reliability assessment techniques for highly reliable software and presents a software reliability model for the case of scarce failure data. Assuming that failures occurring during the reliability testing of highly reliable software are independent, identically distributed rare events, the feasibility of applying extreme value statistics to software reliability assessment is analyzed theoretically, an extreme-value statistical model of software reliability is established, and parameter estimation and hypothesis testing methods for the model are discussed.
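A toy sketch under simplifying assumptions of its own (simulated inter-failure times, block maxima, a generalized extreme value fit) gives a flavour of the extreme-value approach; it does not reproduce the paper's model, estimators, or hypothesis tests.

```python
# Toy sketch of the extreme-value idea (my own simplification): block maxima
# of simulated inter-failure times are fitted with a generalized extreme value
# distribution, from which a high quantile can be read off.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
interfailure = rng.exponential(scale=500.0, size=(40, 25))  # hours, simulated
block_maxima = interfailure.max(axis=1)                     # one maximum per block

shape, loc, scale = stats.genextreme.fit(block_maxima)
q99 = stats.genextreme.ppf(0.99, shape, loc=loc, scale=scale)
print(f"GEV fit: shape={shape:.2f}, loc={loc:.0f}, scale={scale:.0f}")
print(f"estimated 99th-percentile block-maximum inter-failure time: {q99:.0f} h")
```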

13.
There are many software reliability models that are based on the times of occurrences of errors in the debugging of software. It is shown that it is possible to do asymptotic likelihood inference for software reliability models based on order statistics or nonhomogeneous Poisson processes, with asymptotic confidence levels for interval estimates of parameters. In particular, interval estimates from these models are obtained for the conditional failure rate of the software, given the data from the debugging process. The data can be grouped or ungrouped. For someone making a decision about when to market software, the conditional failure rate is an important parameter. The use of interval estimates is demonstrated for two data sets that have appeared in the literature.
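A minimal maximum-likelihood sketch for a Goel-Okumoto NHPP, with invented failure times, shows the kind of likelihood involved; interval estimates of the sort discussed in the paper would additionally require the observed information matrix, which is omitted here.

```python
# Minimal maximum-likelihood sketch for a Goel-Okumoto NHPP with intensity
# a*b*exp(-b*t); the failure times are invented for illustration only.
import numpy as np
from scipy.optimize import minimize

failure_times = np.array([9, 21, 32, 36, 43, 45, 50, 58, 63, 70,
                          71, 77, 78, 87, 91, 92, 95, 98, 104, 105], float)
T = 110.0  # end of the observation period

def neg_log_lik(params):
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    # log-likelihood: sum of log-intensities at failure times minus m(T)
    return -(np.sum(np.log(a * b) - b * failure_times) - a * (1 - np.exp(-b * T)))

res = minimize(neg_log_lik, x0=[30.0, 0.01], method="Nelder-Mead")
a_hat, b_hat = res.x
print(f"a_hat = {a_hat:.1f}, b_hat = {b_hat:.4f}")
print(f"conditional failure rate at T: {a_hat * b_hat * np.exp(-b_hat * T):.4f}")
```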

14.
The authors present optimization models for software systems that are developed using a modular design technique. Four different software structures are considered: one program, no redundancy; one program, with redundancy; multiple programs, no redundancy; and multiple programs, with redundancy. The optimization problems are solved by using the authors' version of established optimization methods. The practical usefulness of this study is to draw the attention of software practitioners to an existing methodology that may be used to make an optimal selection out of an available pool of modules with known reliability and cost. All four models maximize software reliability while ensuring that expenditures remain within available resources. The software manager is allowed to select the appropriate model for a given situation.
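A brute-force sketch with invented reliability and cost figures conveys the basic selection problem (one candidate per module, a shared budget, series reliability); it is not the authors' optimization method and would not scale to large module pools.

```python
# Brute-force sketch of the module-selection idea (not the authors' method):
# pick one candidate implementation per module to maximize system reliability
# (product of module reliabilities) without exceeding the budget.
from itertools import product

# (reliability, cost) alternatives per module -- invented numbers.
candidates = [
    [(0.95, 3), (0.99, 7)],          # module 1
    [(0.90, 2), (0.97, 5)],          # module 2
    [(0.92, 4), (0.98, 9)],          # module 3
]
budget = 15

best = None
for choice in product(*candidates):
    cost = sum(c for _, c in choice)
    if cost > budget:
        continue                      # infeasible combination
    rel = 1.0
    for r, _ in choice:
        rel *= r                      # series system reliability
    if best is None or rel > best[0]:
        best = (rel, cost, choice)

rel, cost, choice = best
print(f"best system reliability {rel:.4f} at cost {cost} with choices {choice}")
```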

15.
Optimal and adaptive testing for software reliability assessment
Optimal software testing is concerned with how to test software such that the underlying testing goal is achieved in an optimal manner. Our previous work shows that the optimal testing problem for software reliability growth can be treated as a closed-loop or feedback control problem, where the software under test serves as the controlled object and the software testing strategy serves as the corresponding controller. More specifically, the software under test is modeled using controlled Markov chains (CMCs), and the control theory of Markov chains is used to synthesize the required optimal testing strategy. In this paper, we show that software reliability assessment can also be treated as a feedback control problem and that the CMC approach is applicable to the optimal testing problem for software reliability assessment. In this problem, the code of the software under test is frozen, and the software testing process is optimized in the sense that the variance of the software reliability estimator is minimized. An adaptive software testing strategy is proposed that uses the testing data collected on-line to estimate the required parameters and to select the next test cases. Simulation results show that the proposed adaptive software testing strategy works in the sense that the resulting variance of the software reliability estimate is much smaller than that resulting from random testing strategies. The work presented in this paper is a contribution to the new area of software cybernetics, which explores the interplay between software and control.

16.
The debate between those who prefer formal methods and those who advocate the use of reliability-growth models in the assessment of software reliability continues. Some issues arising from this conflict are raised and discussed, beginning with the definition of quantified software reliability. After arguing that the stochastic modelling approach is conceptually sound, its possible relationship with the structure of the software is discussed. Evidence that certain structural counts can be used in place of time as the argument in reliability growth models is reported. The question of the extent to which the many non-stochastic metrics now available contribute to software reliability quantification is aired. A relationship between reliability and a hierarchy of coverage metrics is reported, which may help to draw together the modellers and the testers.

17.
The difficulties of building generic reliability models for software
The software engineering research community has spent considerable effort in developing models to predict the behaviour of software. A number of these models have been derived from the pre- and post-development behaviour of particular software products, but when these models are applied to other products the results are often disappointing. This appears to differentiate software from other engineering disciplines, which often depend on generic predictive models to verify the correctness of their products. This short paper discusses why other engineering disciplines have managed to create generalized models, the challenges the software industry faces in building such models, and the change we have made to our process at Microsoft to address some of these challenges.

18.
A comparison of time domains (i.e., execution time vs. calendar time) is made for software reliability models, with the purpose of reaching some general conclusions about their relative desirability. The comparison is made by using a generic failure intensity function that represents a large majority of the principal models. The comparison is based on how well the function fits the estimated failure intensity, where the failure intensity is estimated with respect to both kinds of time. The failure intensity in each time domain is examined for trends. Failure intensity estimates are calculated from carefully collected data. The execution time domain is found to be highly superior to the calendar time domain.

19.
Software Quality Journal - The phenomenon of software aging refers to the continuing degradation of software system performance with operation time and is usually caused by the aging-related...

20.
The usefulness of connectionist models for software reliability growth prediction is illustrated. The applicability of the connectionist approach is explored using various network models, training regimes, and data representation methods. An empirical comparison is made between this approach and five well-known software reliability growth models using actual data sets from several different software projects. The results presented suggest that connectionist models may adapt well across different data sets and exhibit better predictive accuracy. The analysis shows that the connectionist approach is capable of developing models of varying complexity.
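A minimal sketch using a generic multilayer perceptron from scikit-learn (not the specific networks, training regimes, or data representations studied in the paper, and with invented failure counts) shows the basic connectionist setup: map normalized testing time to cumulative failures and extrapolate one step ahead.

```python
# Generic connectionist sketch: a small MLP regressor learns the mapping from
# normalized testing time to cumulative failures and predicts the held-out
# last point. Data are invented for illustration only.
import numpy as np
from sklearn.neural_network import MLPRegressor

t = np.arange(1, 21, dtype=float)
cum_failures = np.array([4, 9, 13, 17, 20, 23, 25, 27, 28, 30,
                         31, 32, 33, 34, 34, 35, 35, 36, 36, 36], float)

X = (t / t.max()).reshape(-1, 1)               # normalized testing time
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(X[:-1], cum_failures[:-1])             # hold out the last observation

pred = net.predict(X[-1:])[0]
print(f"predicted cumulative failures at t=20: {pred:.1f} (actual {cum_failures[-1]:.0f})")
```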
