20 similar documents found; search time: 0 ms
1.
《Computers & Mathematics with Applications》2006,51(2):161-170
A software reliability growth model is one of the fundamental techniques for assessing software reliability quantitatively. Such a model is required to perform well in terms of goodness-of-fit, predictability, and so forth. In this paper, we propose discretized software reliability growth models. In particular, discretized nonhomogeneous Poisson process models are investigated for accurate software reliability assessment. We show that the discretized nonhomogeneous Poisson process models perform better than the discretized deterministic software reliability growth models that have been proposed so far.
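The abstract does not reproduce the model equations. As a minimal sketch, assuming the Goel-Okumoto mean value function m(t) = a(1 − e^(−bt)) as a representative continuous NHPP growth model (the synthetic data, parameter values, and grid-search fit below are illustrative, not taken from the paper):

```python
import math

def mean_value(t, a, b):
    """Expected cumulative number of failures by time t under the
    Goel-Okumoto NHPP: m(t) = a * (1 - exp(-b * t))."""
    return a * (1.0 - math.exp(-b * t))

def fit_go(times, counts, b_grid):
    """Least-squares fit of (a, b) to cumulative failure counts.
    For a fixed b, the optimal a has a closed form (regression through
    the origin on x = 1 - exp(-b t)); b is found by grid search."""
    best = None
    for b in b_grid:
        x = [1.0 - math.exp(-b * t) for t in times]
        a = sum(xi * yi for xi, yi in zip(x, counts)) / sum(xi * xi for xi in x)
        sse = sum((yi - a * xi) ** 2 for xi, yi in zip(x, counts))
        if best is None or sse < best[0]:
            best = (sse, a, b)
    return best[1], best[2]  # (a_hat, b_hat)

# Synthetic grouped data generated from a known model (a=120, b=0.3).
times = [1, 2, 3, 4, 5, 6, 7, 8]
counts = [round(mean_value(t, 120.0, 0.3)) for t in times]
a_hat, b_hat = fit_go(times, counts, [i / 100.0 for i in range(5, 100)])
print(a_hat, b_hat)
```

With near-noiseless data the fit recovers the generating parameters; on real grouped failure counts a maximum-likelihood fit would normally be preferred over least squares.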
2.
3.
Kai-Yuan Cai, Ping Cao, Zhao Dong, Ke Liu 《Computers & Mathematics with Applications》2010,59(10):3245-3285
Software reliability testing is concerned with the quantitative relationship between software testing and software reliability. Our previous work developed a mathematically rigorous modeling framework for software reliability testing. However, that framework is confined to the case of perfect debugging, where detected defects are removed without introducing new ones. In this paper the framework is extended to the case of imperfect debugging, and two models are proposed. The first model assumes that debugging is imperfect and may reduce the number of remaining defects by one, leave it unchanged, or increase it by one. The second model assumes that once the number of remaining defects reaches its upper bound, the probability that debugging increases the number of remaining defects by one is zero. The expected behaviors of the cumulative number of observed failures and of the number of remaining defects in the first model show that the software testing process may induce a linear or a nonlinear dynamic system, depending on the relationship between the probability that debugging introduces a new defect and the probability that debugging removes a detected defect. The second-order behaviors of the first model also show that, in the case of imperfect debugging, although there may be an unbiased estimator for the initial number of defects remaining in the software under test, the cumulative number of observed failures and the current number of remaining defects are not sufficient for precisely estimating that initial number, because the variance of the unbiased estimator approaches a non-zero constant as the software testing process proceeds. This may be regarded as an intrinsic principle of uncertainty for software testing. The expected behaviors of the two quantities in the second model show that the software testing process may induce a nonlinear dynamic system.
However, theoretical analysis and simulation results show that, if defects are more often removed from than introduced into the software under test, the expected behaviors of the two models tend to coincide as the upper bound on the number of remaining defects approaches infinity.
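The first model's imperfect-debugging dynamics can be sketched as a birth-death random walk on the number of remaining defects. The transition probabilities and parameter values below are illustrative assumptions, not the paper's:

```python
import random

def simulate_debugging(n0, p_remove, p_insert, steps, seed=1):
    """Simulate the number of remaining defects under imperfect debugging:
    each debugging action removes a defect with probability p_remove,
    introduces a new one with probability p_insert, and otherwise
    leaves the count unchanged."""
    rng = random.Random(seed)
    n = n0
    for _ in range(steps):
        if n == 0:
            break  # all defects removed
        u = rng.random()
        if u < p_remove:
            n -= 1
        elif u < p_remove + p_insert:
            n += 1
    return n

# When removal is more likely than insertion, the defect count drifts down,
# matching the regime in which the two models' expected behaviors coincide.
remaining = simulate_debugging(n0=50, p_remove=0.6, p_insert=0.2, steps=500)
print(remaining)
```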
4.
We describe the use of a latent Markov process governing the parameters of a nonhomogeneous Poisson process (NHPP) model to characterize the software development defect discovery process. Using a Markov switching process allows us to capture non-smooth variations in the rate at which defects are found, better reflecting industrial software development practice. Additionally, we propose a multivariate model for characterizing changes over time in the distribution of the defect types found, conditional on the total number of defects; a latent Markov chain governs the evolution of the probabilities of the different types. Bayesian methods via Markov chain Monte Carlo facilitate inference. We illustrate the efficacy of the methods using simulated data, then apply them to model reliability growth in a large operating system software component, based on defects discovered during the system testing phase of development.
5.
Brocklehurst S., Chan P.Y., Littlewood B., Snell J. 《IEEE Transactions on Software Engineering》1990,16(4):458-470
There is no universally applicable software reliability growth model that can be trusted to give accurate predictions of reliability in all circumstances. A technique for analyzing predictive accuracy called the u-plot allows a user to estimate the relationship between the predicted reliability and the true reliability. It is shown how this can be used to improve reliability predictions in a very general way through a process of recalibration. Simulation results show that the technique gives improved reliability predictions in a large proportion of cases. However, a user does not need to trust the efficacy of recalibration, since the new reliability estimates produced by the technique are truly predictive and their accuracy in a particular application can be judged using the earlier methods. The generality of this approach suggests its use whenever a software reliability model is used. Indeed, although this work arose from the need to address the poor performance of software reliability models, it is likely to have applicability in other areas such as reliability growth modeling for hardware.
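A minimal sketch of the u-plot idea: transform each observed inter-failure time through its one-step-ahead predicted CDF; if the predictions are accurate, the resulting u values should look uniform on (0, 1), and their maximum deviation from the uniform diagonal measures predictive bias. The exponential predictive distributions and sample times below are illustrative assumptions, not data from the paper:

```python
import math

def u_plot_distance(predicted_cdfs, observed_times):
    """Kolmogorov-style u-plot distance: u_i = F_i(t_i) for each
    one-step-ahead predicted CDF F_i, then the maximum deviation of
    the empirical CDF of the u_i from the Uniform(0, 1) diagonal."""
    us = sorted(F(t) for F, t in zip(predicted_cdfs, observed_times))
    n = len(us)
    return max(max(abs((i + 1) / n - u), abs(u - i / n))
               for i, u in enumerate(us))

# Toy example: exponential inter-failure times predicted with the
# correct rate (1.0) versus a strongly biased rate (5.0).
times = [0.5, 1.2, 0.3, 2.0, 0.8, 1.5, 0.6, 1.1]
good = [(lambda t, r=1.0: 1 - math.exp(-r * t)) for _ in times]
biased = [(lambda t, r=5.0: 1 - math.exp(-r * t)) for _ in times]
d_good = u_plot_distance(good, times)
d_biased = u_plot_distance(biased, times)
print(d_good, d_biased)
```

Recalibration, as described above, uses the shape of this deviation (not just its size) to correct future predictions.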
6.
Software managers are routinely confronted with software projects that contain errors or inconsistencies and exceed budget and time limits. By mining software repositories with comprehensible data mining techniques, predictive models can be induced that offer software managers the insights they need to tackle these quality and budgeting problems efficiently. This paper examines the role that the Ant Colony Optimization (ACO)-based classification technique AntMiner+ can play as a comprehensible data mining technique for predicting erroneous software modules. In an empirical comparison on three real-world public datasets, the rule-based models produced by AntMiner+ achieve predictive accuracy competitive with that of models induced by several other classification techniques, such as C4.5, logistic regression, and support vector machines. In addition, we argue that the intuitiveness and comprehensibility of the AntMiner+ models can be considered superior to those of the latter models.
7.
Helander M.E., Ming Zhao, Ohlsson N. 《IEEE Transactions on Software Engineering》1998,24(6):420-434
This paper presents modeling frameworks for distributing development effort among software components to facilitate cost-effective progress toward a system reliability goal. The emphasis on components means that the frameworks can be used, for example, in cleanroom processes and to set certification criteria. The approach, based on reliability allocation, uses the operational profile to quantify the usage environment and a utilization matrix to link usage with system structure. Two approaches for reliability and cost planning are introduced: Reliability-Constrained Cost-Minimization (RCCM) and Budget-Constrained Reliability-Maximization (BCRM). Efficient solutions are presented corresponding to three general functions for measuring the cost to attain a failure intensity; one of the functions is shown to be a generalization of the basic COCOMO form. Planning within budget, adaptation to other cost functions, and validation issues are also discussed. The analysis capabilities are illustrated using a software system consisting of 26 developed modules and one procured module. The example also illustrates how to specify a reliability certification level, and a minimum purchase price, for the procured module.
8.
Software reliability models require that the operational profile used during testing match the operational profile of actual use, but this is often difficult to achieve, so the reliability predicted at the end of testing can differ considerably from the reliability actually attained after release. To improve the accuracy of software reliability assessment, this paper introduces the concept of profile divergence and assumes that the divergence between the testing operational profile and the actual operational profile is the same across versions of the same software. Under this assumption, a multi-version calibration method is proposed that uses the profile divergence of previous versions of the software to improve the reliability assessment of the current version.
9.
Berman O., Ashrafi N. 《IEEE Transactions on Software Engineering》1993,19(11):1119-1123
The authors present optimization models for software systems that are developed using a modular design technique. Four different software structures are considered: one program without redundancy; one program with redundancy; multiple programs without redundancy; and multiple programs with redundancy. The optimization problems are solved using the authors' versions of established optimization methods. The practical usefulness of this study is to draw the attention of software practitioners to an existing methodology that can be used to make an optimal selection from an available pool of modules with known reliability and cost. All four models maximize software reliability while ensuring that expenditures remain within available resources. The software manager can then select the appropriate model for a given situation.
10.
There are many software reliability models based on the times of occurrence of errors during software debugging. It is shown that asymptotic likelihood inference is possible for software reliability models based on order statistics or nonhomogeneous Poisson processes, with asymptotic confidence levels for interval estimates of parameters. In particular, interval estimates are obtained from these models for the conditional failure rate of the software, given the data from the debugging process. The data can be grouped or ungrouped. For someone deciding when to market software, the conditional failure rate is an important parameter. The use of interval estimates is demonstrated for two data sets that have appeared in the literature.
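A minimal sketch of exact-time NHPP likelihood inference, assuming the Goel-Okumoto intensity for concreteness (the failure times and parameter values are illustrative): for failure times t_1 < … < t_n observed over [0, T], the log-likelihood is Σ_i ln λ(t_i) − m(T), and λ(T) is the conditional failure rate relevant to a release decision:

```python
import math

def go_intensity(t, a, b):
    """Goel-Okumoto failure intensity: lambda(t) = a * b * exp(-b * t)."""
    return a * b * math.exp(-b * t)

def go_mean(t, a, b):
    """Goel-Okumoto mean value function: m(t) = a * (1 - exp(-b * t))."""
    return a * (1.0 - math.exp(-b * t))

def log_likelihood(times, T, a, b):
    """Exact-time NHPP log-likelihood: sum_i ln lambda(t_i) - m(T)."""
    return sum(math.log(go_intensity(t, a, b)) for t in times) - go_mean(T, a, b)

# Hypothetical debugging data observed over [0, 5].
times = [0.4, 1.1, 1.9, 3.2, 4.8]
ll = log_likelihood(times, T=5.0, a=6.0, b=0.4)
rate = go_intensity(5.0, 6.0, 0.4)  # conditional failure rate at release time
print(ll, rate)
```

Maximizing this log-likelihood over (a, b), and inverting the observed information, is what yields the asymptotic interval estimates the abstract refers to.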
11.
Optimal and adaptive testing for software reliability assessment (total citations: 4; self-citations: 0; citations by others: 4)
Optimal software testing is concerned with how to test software such that the underlying testing goal is achieved in an optimal manner. Our previous work shows that the optimal testing problem for software reliability growth can be treated as a closed-loop, or feedback, control problem, in which the software under test serves as the controlled object and the software testing strategy serves as the corresponding controller. More specifically, the software under test is modeled as a controlled Markov chain (CMC), and the control theory of Markov chains is used to synthesize the required optimal testing strategy. In this paper, we show that software reliability assessment can also be treated as a feedback control problem and that the CMC approach applies to the optimal testing problem for software reliability assessment. Here the code of the software under test is frozen, and the software testing process is optimized in the sense that the variance of the software reliability estimator is minimized. An adaptive software testing strategy is proposed that uses testing data collected on-line to estimate the required parameters and to select the next test cases. Simulation results show that the proposed adaptive strategy is effective: the resulting variance of the software reliability estimate is much smaller than that obtained with random testing strategies. The work presented in this paper contributes to the new area of software cybernetics, which explores the interplay between software and control.
12.
Alan Veevers 《Software Testing, Verification and Reliability》1991,1(1):17-22
The debate between those who prefer formal methods and those who advocate the use of reliability-growth models in the assessment of software reliability continues. Some issues arising from this conflict are raised and discussed, beginning with the definition of quantified software reliability. After arguing that the stochastic modelling approach is conceptually sound, its possible relationship with the structure of the software is discussed. Evidence that certain structural counts can be used in place of time as the argument in reliability growth models is reported. The question of the extent to which the many non-stochastic metrics now available contribute to software reliability quantification is aired. A relationship between reliability and a hierarchy of coverage metrics is reported, which may help to draw together the modellers and the testers.
13.
Brendan Murphy 《Empirical Software Engineering》2012,17(1-2):18-22
The software engineering research community has spent considerable effort developing models to predict the behaviour of software. A number of these models have been derived from the pre- and post-development behaviour of software products, but when these models are applied to other products, the results are often disappointing. This appears to differentiate software from other engineering disciplines, which often depend on generic predictive models to verify the correctness of their products. This short paper discusses why other engineering disciplines have managed to create generalized models, the challenges the software industry faces in building such models, and the changes we have made to our process at Microsoft to address some of these challenges.
14.
A comparison of time domains (i.e., execution time vs. calendar time) is made for software reliability models, with the purpose of reaching some general conclusions about their relative desirability. The comparison is made by using a generic failure intensity function that represents a large majority of the principal models. The comparison is based on how well the function fits the estimated failure intensity, where the failure intensity is estimated with respect to both kinds of time. The failure intensity in each time domain is examined for trends. Failure intensity estimates are calculated from carefully collected data. The execution time domain is found to be highly superior to the calendar time domain.
15.
Software Quality Journal - The phenomenon of software aging refers to the continuing degradation of software system performance over operation time and is usually caused by the aging-related...
16.
Karunanithi N., Whitley D., Malaiya Y.K. 《IEEE Transactions on Software Engineering》1992,18(7):563-574
The usefulness of connectionist models for software reliability growth prediction is illustrated. The applicability of the connectionist approach is explored using various network models, training regimes, and data representation methods. An empirical comparison is made between this approach and five well-known software reliability growth models using actual data sets from several different software projects. The results suggest that connectionist models adapt well across different data sets and exhibit better predictive accuracy. The analysis shows that the connectionist approach is capable of developing models of varying complexity.
17.
Two unusual methods for debugging system software (total citations: 1; self-citations: 0; citations by others: 0)
Mark Rain 《Software: Practice and Experience》1973,3(1):61-63
This paper describes the Bug Farm, a generator of test cases for compilers, and the Bug Contest, an administrative technique for speeding the process of testing system software. The techniques achieve a high rate of bug detection while minimizing user revulsion at undebugged software.
18.
Kai-Yuan Cai, Chang-Hai Jiang, Cheng-Gang Bai 《Journal of Systems and Software》2008,81(8):1406-1429
Adaptive testing is a new form of software testing that is based on the feedback and adaptive control principle and can be treated as the software testing counterpart of adaptive control. Our previous work has shown that adaptive testing can be formulated and guided in theory to minimize the variance of an unbiased software reliability estimator and to achieve optimal software reliability assessment. In this paper, we present an experimental study of adaptive testing for software reliability assessment, in which the adaptive testing strategy, the random testing strategy, and the operational-profile-based testing strategy were applied to the Space program in four experiments. The experimental results demonstrate that the adaptive testing strategy works in practice and may noticeably outperform the other two. The adaptive testing strategy can therefore serve as a preferable alternative to the random testing strategy and the operational-profile-based testing strategy when high confidence in the reliability estimates is required or the real-world operational profile of the software under test cannot be accurately identified.
19.
A set of linear combination software reliability models that combine the results of single, or component, models is presented. It is shown that, as measured by statistical methods for determining a model's applicability to a set of failure data, a combination model tends to yield more accurate short-term and long-term predictions than a component model. These models were evaluated using both historical data sets and data from recent Jet Propulsion Laboratory projects. The computer-aided software reliability estimation (CASRE) tool, which automates many reliability measurement tasks and makes it easier to apply reliability models and to form combination models, is described.
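The combination idea above can be sketched in a few lines; the fixed weights and the component forecasts below are hypothetical, chosen only to show the mechanics:

```python
def combine_forecasts(component_forecasts, weights=None):
    """Linearly combine the reliability forecasts of several component
    models. With no weights given, use an equally weighted combination."""
    n = len(component_forecasts)
    if weights is None:
        weights = [1.0 / n] * n
    return sum(w * f for w, f in zip(weights, component_forecasts))

# Two hypothetical component models' next-interval failure-rate forecasts,
# one optimistic and one pessimistic; the combination sits between them.
equal = combine_forecasts([0.02, 0.06])
weighted = combine_forecasts([0.02, 0.06], [0.75, 0.25])
print(equal, weighted)
```

In practice the weights would be chosen from each component model's recent predictive accuracy rather than fixed in advance.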
20.
Software reliability growth models attempt to forecast the future reliability of a software system based on observations of the historical occurrences of failures. This allows management to estimate the failure rate of the system in field use and to set release criteria based on these forecasts. However, current software reliability growth models have never proven accurate enough for widespread industry use. One possible reason is that the model forms themselves may not accurately capture the underlying process of fault injection in software; it has been suggested that fault injection is better modeled as a chaotic process than a random one. This possibility, while intriguing, has not yet been evaluated on large-scale, modern software reliability growth datasets. We report on an analysis of four software reliability growth datasets, including ones drawn from the Android and Mozilla open-source software communities. These are the four largest software reliability growth datasets we are aware of in the public domain, ranging from 1,200 to over 86,000 observations. We employ the methods of nonlinear time series analysis to test for chaotic behavior in these time series and find that three of the four show evidence of such behavior (specifically, a multifractal attractor). Finally, we compare a deterministic time series forecasting algorithm against a statistical one on these datasets, to evaluate whether exploiting the apparent chaotic behavior might lead to more accurate reliability forecasts.