Similar Documents
20 similar documents retrieved (search time: 421 ms)
1.
This paper investigates an SRGM (software reliability growth model) based on the NHPP (nonhomogeneous Poisson process) which incorporates a logistic testing-effort function. SRGMs proposed in the literature consider the amount of testing effort spent on software testing, which is usually depicted as an exponential, Rayleigh, or Weibull curve. However, in some software development environments it might not be appropriate to represent the testing-effort consumption curve by one of those curves. Therefore, this paper shows that a logistic testing-effort function can be expressed as a software-development/test-effort curve and that it gives good predictive capability on real failure data. Parameters are estimated, and experiments are performed on actual test/debug data sets. Results from application to a real data set are analyzed and compared with other existing models to show that the proposed model gives better predictions. In addition, an optimal software release policy for this model, based on cost-reliability criteria, is proposed.
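For orientation, a common way to combine a logistic cumulative testing-effort function with an NHPP mean-value function is sketched below; it is a rough illustration under assumed parameter names (N, A, alpha, a, r), not necessarily the exact parameterization used in the paper.

```python
import numpy as np

def logistic_testing_effort(t, N, A, alpha):
    """Cumulative testing effort W(t) following a logistic curve.
    N: total effort eventually consumed, A: shape, alpha: consumption rate.
    (Illustrative parameterization; the paper's notation may differ.)"""
    return N / (1.0 + A * np.exp(-alpha * t))

def mean_value(t, a, r, N, A, alpha):
    """NHPP mean-value function m(t) = a * (1 - exp(-r * (W(t) - W(0)))),
    i.e. expected cumulative faults detected by time t when detection is
    driven by the testing effort actually spent."""
    W = logistic_testing_effort(t, N, A, alpha)
    W0 = logistic_testing_effort(0.0, N, A, alpha)
    return a * (1.0 - np.exp(-r * (W - W0)))

# Example: expected faults found after 20 weeks under assumed parameters.
print(mean_value(20.0, a=100.0, r=0.05, N=50.0, A=10.0, alpha=0.3))
```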

2.
This paper presents an NHPP (nonhomogeneous Poisson process) based SRGM (software reliability growth model) for NVP (N-version programming) systems, referred to as NVP-SRGM. Although many papers have been devoted to modeling NVP-system reliability, most of them consider only the stable reliability, i.e., they do not consider the reliability growth in NVP systems due to continuous removal of faults from software versions. The model in this paper is the first reliability-growth model for NVP systems which considers the error-introduction rate and the error-removal efficiency. During testing and debugging, when a software fault is found, a debugging effort is devoted to removing this fault. Due to the high complexity of the software, this fault might not be successfully removed, and new faults might be introduced into the software. By applying a generalized NHPP model to the NVP system, a new NVP-SRGM is established, in which multi-version coincident failures are well modeled. A simplified software control logic for a water-reservoir control system illustrates how to apply this new software reliability model. The s-confidence bounds are provided for system-reliability estimation. This software reliability model can be used to evaluate the reliability and to predict the performance of NVP systems. More applications are needed to fully validate the proposed NVP-SRGM for quantifying the reliability of fault-tolerant software systems in a general industrial setting. As the first model of its kind in NVP reliability-growth modeling, the proposed NVP-SRGM can be used to overcome the shortcomings of the independent reliability model. It predicts the system reliability more accurately than the independent model and can be used to help determine when to stop testing, which is a key question in the testing and debugging phase of the NVP system-development life cycle.
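The abstract does not give the version-level equations; purely as a hedged illustration of how an error-introduction rate and a fault-removal efficiency can enter an NHPP model, the sketch below uses a generic single-version formulation in which a fraction p of detected faults is actually removed and each removal introduces beta new faults on average. The names and the closed form are assumptions, not the paper's NVP-SRGM.

```python
import numpy as np

def detected_faults(t, a0, b, p, beta):
    """Expected faults detected by time t under a simple imperfect-debugging
    NHPP: detection rate b, fault-removal efficiency p (fraction of detected
    faults actually removed), and error-introduction rate beta (new faults
    introduced per removed fault).  Solves dm/dt = b*(a0 + beta*p*m - p*m),
    a generic formulation offered for illustration only."""
    c = b * p * (1.0 - beta)
    return a0 * (1.0 - np.exp(-c * t)) / (p * (1.0 - beta))

# With perfect debugging (p=1, beta=0) this reduces to the Goel-Okumoto model.
print(detected_faults(10.0, a0=120.0, b=0.1, p=0.9, beta=0.05))
```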

3.
Test coverage is an intuitive measure of how thoroughly software has been tested, yet the vast majority of existing coverage-based software reliability growth models do not consider fault-removal efficiency. This paper introduces both test coverage and fault-removal efficiency into the software reliability assessment process and establishes a nonhomogeneous-Poisson-process software reliability growth model that accounts for both factors. An experimental analysis on a set of failure data shows that, for this data set, the proposed model fits better than several other NHPP-type models.

4.
This paper describes a method of predicting the software reliability of a measurement-and-control system using two NHPP-type growth models, gives their mathematical analytic expressions, and clarifies the relationship between the two. For the test-and-debug process of the software of a launch measurement-and-control system, the size of the program is first estimated; based on the data obtained, the two NHPP models are then used to compute estimates of the model parameters and to predict the software's reliability level and the time required for each of the software tests.

5.
An improved software reliability release policy is presented, based on the nonhomogeneous Poisson process (NHPP) and incorporating the effect of testing effort. Testing effort functions are modelled by exponential, Rayleigh and Weibull curves. The optimal software release time is determined by minimizing the total expected software cost under the conditions of satisfying a software reliability objective. Numerical examples have been included to illustrate the software release policy.
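As a hedged sketch of this kind of cost-based release-time analysis (the Goel-Okumoto mean-value function, the cost coefficients, and all numbers below are assumptions rather than the paper's model), one picks the release time T that minimizes expected cost while the conditional reliability over a mission of length x stays above a target R0.

```python
import numpy as np

a, b = 100.0, 0.05          # assumed fault content and detection rate
c1, c2, c3 = 1.0, 50.0, 0.5 # cost per fault fixed in test, per fault in the field, per unit test time
T_lc = 500.0                # assumed software life-cycle length
R0, x = 0.95, 10.0          # required reliability over a mission of length x

m = lambda t: a * (1.0 - np.exp(-b * t))            # mean-value function (assumed)
rel = lambda T: np.exp(-(m(T + x) - m(T)))          # P(no failure in (T, T+x])
cost = lambda T: c1 * m(T) + c2 * (m(T_lc) - m(T)) + c3 * T

Ts = np.linspace(1.0, T_lc, 5000)
feasible = Ts[rel(Ts) >= R0]                        # release times meeting the reliability objective
T_opt = feasible[np.argmin(cost(feasible))]
print(f"release at T = {T_opt:.1f}, cost = {cost(T_opt):.1f}, R(x|T) = {rel(T_opt):.3f}")
```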

6.
In this paper, we discuss a software reliability growth model with a learning factor for imperfect debugging based on a non-homogeneous Poisson process (NHPP). Parameters used in the model are estimated. An optimal release policy is obtained for a software system based on the total mean profit and reliability criteria. A random software life-cycle is also incorporated in the discussion. Numerical results are presented in the final section.

7.
This paper presents a stochastic model for the software failure phenomenon based on a nonhomogeneous Poisson process (NHPP). The failure process is analyzed to develop a suitable mean-value function for the NHPP; expressions are given for several performance measures. Actual software failure data are analyzed and compared with a previous analysis.
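For reference (standard NHPP relations rather than anything specific to this paper), the performance measures follow directly from the mean-value function m(t): the number of failures by time t is Poisson with mean m(t), and the probability of failure-free operation over (t, t+x] is exp(-(m(t+x) - m(t))). A minimal Python sketch, using an illustrative exponential m(t):

```python
import math

def pois_pmf(k, mean):
    """P(N(t) = k) for an NHPP whose mean-value function at t equals `mean`."""
    return mean**k * math.exp(-mean) / math.factorial(k)

def conditional_reliability(m, t, x):
    """R(x | t) = P(no failure in (t, t+x]) = exp(-(m(t+x) - m(t)))."""
    return math.exp(-(m(t + x) - m(t)))

# Illustrative mean-value function (exponential/Goel-Okumoto form used only
# as an example; the paper develops its own m(t)).
m = lambda t, a=80.0, b=0.1: a * (1.0 - math.exp(-b * t))
print(pois_pmf(3, m(10.0)), conditional_reliability(m, 10.0, 5.0))
```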

8.
This paper presents four models for optimizing the reliability of embedded systems, considering both software and hardware reliability under cost constraints, and one model for optimizing system cost under multiple reliability constraints. Previously, most optimization models were developed for hardware-only or software-only systems, assuming that the hardware, if any, has perfect reliability. In addition, they assume that failures of each hardware or software unit are statistically independent. In other words, none of the existing optimization models were developed for embedded systems (hardware and software) with failure dependencies. Each of our models is suitable for a distinct set of conditions or situations. The first four models maximize reliability while meeting cost constraints, and the fifth model minimizes system cost under multiple reliability constraints. This is the first time that optimization of these kinds of models has been performed on this type of system. We demonstrate and validate our models for an embedded system with multiple applications sharing multiple resources. We use a Simulated Annealing optimization algorithm to demonstrate our system reliability optimization techniques for distributed systems, because of its flexibility for various problem types with various constraints. It is efficient, and provides satisfactory optimization results while meeting difficult-to-satisfy constraints.
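The abstract does not detail the optimization formulation, so the following is only a generic simulated-annealing sketch for one of the problem types it mentions: choosing per-component redundancy levels to maximize the reliability of an assumed series system of parallel-redundant components under a cost budget. All reliabilities, costs, and the cooling schedule are illustrative assumptions.

```python
import math, random

random.seed(1)
comp_rel = [0.90, 0.85, 0.95]   # assumed per-component reliabilities
comp_cost = [4.0, 3.0, 5.0]     # assumed cost per redundant copy
budget = 40.0

def system_reliability(n):
    # series system of parallel-redundant components (an assumption)
    return math.prod(1.0 - (1.0 - r) ** k for r, k in zip(comp_rel, n))

def cost(n):
    return sum(c * k for c, k in zip(comp_cost, n))

def neighbour(n):
    m = list(n)
    i = random.randrange(len(m))
    m[i] = max(1, m[i] + random.choice((-1, 1)))   # add or drop one copy
    return m

n = best = [1, 1, 1]
T = 1.0
for step in range(5000):
    cand = neighbour(n)
    if cost(cand) > budget:
        continue                                   # reject infeasible moves
    delta = system_reliability(cand) - system_reliability(n)
    if delta >= 0 or random.random() < math.exp(delta / T):
        n = cand
        if system_reliability(n) > system_reliability(best):
            best = n
    T *= 0.999                                     # geometric cooling schedule

print(best, system_reliability(best), cost(best))
```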

9.
This paper proposes a new scheme for constructing software reliability growth models (SRGM) based on a nonhomogeneous Poisson process (NHPP). The main focus is to provide an efficient parametric decomposition method for software reliability modeling which considers both testing efforts and fault detection rates (FDR). In general, the software fault detection/removal mechanisms depend on previously detected/removed faults and on how testing efforts are used. From practical field studies, it is likely that we can estimate the testing-effort consumption pattern and predict the trends of FDR. A set of time-variable, testing-effort-based FDR models was developed that has the inherent flexibility of capturing a wide range of possible fault detection trends: increasing, decreasing, and constant. This scheme has a flexible structure and can model a wide spectrum of software development environments, considering various testing efforts. The paper describes the FDR, which can be obtained from historical records of previous releases or other similar software projects, and incorporates the related testing activities into this new modeling approach. The applicability of our model and the related parametric decomposition methods are demonstrated through several real data sets from various software projects. The evaluation results show that the proposed framework, which incorporates testing efforts and FDR into SRGM, has fairly accurate prediction capability and depicts the real-life situation more faithfully. This technique can be applied to a wide range of software systems.

10.
A stochastic model (G-O) for the software failure phenomenon based on a nonhomogeneous Poisson process (NHPP) was suggested by Goel and Okumoto (1979). This model has been widely used, but some important work remains undone on estimating the parameters. The authors present a necessary and sufficient condition for the likelihood estimates to be finite, positive, and unique. A modification of the G-O model is suggested. The performance measures and parametric inferences of the new model are discussed. The results of the new model are applied to real software failure data and compared with the G-O and Jelinski-Moranda models.
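For context, the maximum-likelihood equations for the original G-O model m(t) = a(1 - e^(-bt)), with exact failure times t_1, ..., t_n observed over (0, T], reduce to a = n/(1 - e^(-bT)) plus a single nonlinear equation in b; the paper's necessary-and-sufficient condition concerns when that equation has a finite, positive, unique root. A minimal numerical sketch (the bracketing interval and the data are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import brentq

def go_mle(times, T):
    """MLE for the Goel-Okumoto model m(t) = a*(1 - exp(-b*t)) given exact
    failure times over (0, T].  Setting the partial derivatives of the
    log-likelihood to zero gives a = n / (1 - exp(-b*T)) and
    g(b) = n/b - sum(t_i) - n*T*exp(-b*T)/(1 - exp(-b*T)) = 0."""
    t = np.asarray(times, dtype=float)
    n = len(t)
    g = lambda b: n / b - t.sum() - n * T * np.exp(-b * T) / (1.0 - np.exp(-b * T))
    b = brentq(g, 1e-8, 10.0)          # bracket is an assumption; widen if needed
    a = n / (1.0 - np.exp(-b * T))
    return a, b

# Small synthetic example (illustrative data, not from the paper).
print(go_mle([5, 9, 14, 22, 31, 45, 60, 82, 110, 150], T=160.0))
```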

11.
An S-shaped software reliability growth model (SRGM) based on a non-homogeneous Poisson process (NHPP) with two types of errors has been proposed. The errors have been classified depending upon their severity. We have estimated the model parameters and obtained the optimum release policies which minimize the cost subject to achieving a given level of reliability. Numerical results illustrating the applicability of the proposed model are also presented.

12.
The Duane model for reliability growth involves a rate function which is an inverse power law and has an "infinite" value at t = 0. The model is usually motivated entirely empirically. Here a probabilistic rationale is proposed via a reliability growth model involving the removal of design faults. This rationale results in a modified power law rate, finite at the origin. A wider class of rate functions should be investigated for NHPP models of reliability growth.
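For context, the Duane (power-law) intensity is lambda(t) = lambda*beta*t^(beta-1), which diverges at t = 0 when beta < 1. The sketch below contrasts it with a simple shifted power law that is finite at the origin; the shift is only an illustration of the qualitative idea, not the specific modified rate derived in the paper.

```python
def duane_intensity(t, lam, beta):
    """Power-law (Duane) NHPP intensity; diverges as t -> 0 when beta < 1."""
    return lam * beta * t ** (beta - 1.0)

def shifted_power_law_intensity(t, lam, beta, c):
    """An illustrative modification that is finite at the origin:
    lambda(t) = lam*beta*(t + c)**(beta - 1).  The paper derives its own
    finite-at-origin rate from a fault-removal argument; this shifted form
    only shows the qualitative difference."""
    return lam * beta * (t + c) ** (beta - 1.0)

print(duane_intensity(1e-6, 0.5, 0.4), shifted_power_law_intensity(0.0, 0.5, 0.4, 2.0))
```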

13.
We develop a moving average non-homogeneous Poisson process (MA NHPP) reliability model which includes the benefits of both time-domain and structure-based approaches. This method overcomes the deficiency of existing NHPP techniques that fall short of addressing repair and internal system structures simultaneously. Our solution adopts a MA approach to cover both methods, and is expected to improve reliability prediction. This paradigm allows software components to vary in nature, and can account for system structures due to its ability to integrate individual component reliabilities on an execution path. Component-level modeling supports sensitivity analysis to guide future upgrades and updates. Moreover, the integration capability is a benefit for incremental software development, meaning only the affected portion needs to be re-evaluated instead of the entire package, facilitating software evolution to a higher extent than with other methods. Several experiments on different system scenarios and circumstances are discussed, indicating the usefulness of our approach.

14.
Reliability analysis of integrated hardware/software systems based on Markov processes
于敏  何正友  钱清泉 《电子学报》2010,38(2):473-479
现代大型监控系统通常是一个复杂的硬/软件综合系统,其可靠性分析对于系统的设计、评估具有重要意义.综合考虑硬件、软件特点以及两者之间的相互作用关系,提出一种基于Markov过程的综合系统可靠性分析模型,模型中将系统失效分为硬件失效、软件失效与硬/软件结合失效.实际应用中,由于系统的状态数较大,提出利用循环网络方法对Markov状态转移方程进行求解,从而方便地得到系统处于各状态的瞬时概率与稳态概率.通过分析硬/软件综合系统可靠度、可用度与系统可靠性参数之间的关系,指出硬/软件结合失效将影响系统可用度,忽略硬/软件结合失效将导致可靠性估计值偏离实际值.  相似文献   

15.
We discuss a software reliability growth model with testing-effort, based on a nonhomogeneous Poisson process, and its application to a testing-effort control problem. The time-dependent behaviour of testing-effort expenditures, which is incorporated into software reliability growth, is expressed by a Weibull curve because of its flexibility in describing a number of testing-effort expenditure patterns. Using several sets of actual software error data, the model fitting and examples of a testing-effort control problem are illustrated.
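A commonly used Weibull-type cumulative testing-effort function (the parameterization below follows the usual form in the testing-effort SRGM literature and is illustrative, not necessarily the paper's exact notation) is W(t) = alpha*(1 - exp(-beta*t^m)); m = 1 gives an exponential expenditure pattern and m = 2 a Rayleigh pattern:

```python
import numpy as np

def weibull_testing_effort(t, alpha, beta, m):
    """Cumulative testing effort W(t) = alpha * (1 - exp(-beta * t**m)).
    m = 1 gives an exponential curve, m = 2 a Rayleigh curve; other values of
    m cover further expenditure patterns (parameter names are illustrative)."""
    return alpha * (1.0 - np.exp(-beta * t ** m))

def current_effort(t, alpha, beta, m):
    """Instantaneous effort w(t) = dW/dt."""
    return alpha * beta * m * t ** (m - 1.0) * np.exp(-beta * t ** m)

t = np.linspace(0.0, 20.0, 5)
print(weibull_testing_effort(t, alpha=60.0, beta=0.01, m=2.0))
```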

16.
A general software reliability model based on the nonhomogeneous Poisson process (NHPP) is used to derive a model that integrates imperfect debugging with the learning phenomenon. Learning occurs if testing appears to improve dynamically in efficiency as one progresses through a testing phase. Learning usually manifests itself as a changing fault-detection rate. Published models and empirical data suggest that efficiency growth due to learning can follow many growth curves, from linear to that described by the logistic function. On the other hand, some recent work indicates that in a real industrial resource-constrained environment, very little actual learning might occur because nonoperational profiles used to generate test and business models can prevent the learning. When that happens, the testing efficiency can still change when an explicit change in testing strategy occurs, or it can change as a result of the structural profile of the code under test and test-case ordering.

17.
In this paper we give a general Markov process formulation for a software reliability model and present expressions for software performance measures. We discuss a general model and derive the maximum likelihood estimates for the required parameters of this model. The generality of this model is demonstrated by showing that the Jelinski-Moranda model and the non-homogeneous Poisson process (NHPP) model are both special cases of our model. In the process we also correct some errors in a previous paper on the NHPP model.

18.
Traditional approaches to software reliability modeling are black-box based; that is, the software system is considered as a whole, and only its interactions with the outside world are modeled, without looking into its internal structure. The black-box approach is adequate to characterize the reliability of monolithic, custom, built-to-specification software applications. However, with the widespread use of object-oriented systems design & development, the use of component-based software development is on the rise. Software systems are developed in a heterogeneous fashion (multiple teams in different environments), and hence it may be inappropriate to model the overall failure process of such systems using one of the several software reliability growth models (the black-box approach). Predicting the reliability of a software system based on its architecture and the failure behavior of its components is thus essential. Most of the research efforts in predicting the reliability of a software system based on its architecture have been focused on developing analytical or state-based models. However, the development of state-based models has been mostly ad hoc, with little or no effort devoted towards establishing a unifying framework which compares & contrasts these models. Also, to the best of our knowledge, no attempt has been made to offer an insight into how these models might be applied to real software applications. This paper proposes a unifying framework for state-based models for architecture-based software reliability prediction. The state-based models we consider are the ones in which the application architecture is represented either as a discrete time Markov chain (DTMC) or a continuous time Markov chain (CTMC). We illustrate the DTMC-based and CTMC-based models using examples. A detailed discussion of how the parameters of each model may be estimated, and the life-cycle phases when each model may be applied, is also provided.
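As a hedged sketch of the DTMC-based approach (a Cheung-style composition offered for illustration; the paper's unifying framework is more general), the system reliability can be computed from module reliabilities and an inter-module transition matrix as follows; all module counts and numbers are assumptions.

```python
import numpy as np

# Assumed 4-module application: module reliabilities and a DTMC of control
# transfers between modules (the last module is the terminal one).
R = np.array([0.999, 0.995, 0.990, 0.999])
P = np.array([
    [0.0, 0.6, 0.2, 0.2],
    [0.0, 0.0, 0.7, 0.3],
    [0.0, 0.0, 0.0, 1.0],
    [0.0, 0.0, 0.0, 0.0],   # terminal module: execution ends here
])

# Cheung-style composition: T[i, j] = R[i] * P[i, j] is the probability of
# executing module i correctly and then transferring control to module j.
T = np.diag(R) @ P
S = np.linalg.inv(np.eye(len(R)) - T)      # sums contributions of all paths
system_reliability = S[0, -1] * R[-1]      # reach the terminal module from the
                                           # entry module and execute it correctly
print(system_reliability)
```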

19.
The author studies the Laplace trend test when it is used to detect software reliability growth, and proves its optimality in the frame of the most famous software reliability models. Its intuitive importance is explained, and its statistical properties are established for five models: Goel-Okumoto, Crow, Musa-Okumoto, Littlewood-Verrall, and Moranda. The Laplace test has excellent optimality properties for several models, particularly for nonhomogeneous Poisson processes (NHPPs). It is good in the Moranda model, which is not an NHPP; this entirely justifies the use of this test as a trend test. Nevertheless, the Laplace test is not completely satisfactory because neither its exact statistical-significance level nor its power is calculable, and nothing can be said about its properties for the Littlewood-Verrall model. Consequently, the author suggests that it is always better to check whether the test has good properties in the model at hand, and to search for other tests whose statistical-significance level and power are calculable.
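For reference, the Laplace test statistic for failure times t_1, ..., t_n observed over (0, T] is u = ((1/n) * sum(t_i) - T/2) / (T * sqrt(1/(12n))); strongly negative values indicate reliability growth (a decreasing failure intensity). A minimal sketch with illustrative data:

```python
import math

def laplace_trend(times, T):
    """Laplace trend-test statistic for failure times observed over (0, T].
    u << 0 suggests reliability growth (decreasing intensity), u >> 0 suggests
    deterioration, and u near 0 a roughly homogeneous Poisson process."""
    n = len(times)
    return (sum(times) / n - T / 2.0) / (T * math.sqrt(1.0 / (12.0 * n)))

# Illustrative data: inter-failure gaps that grow over time (reliability growth).
times = [3, 8, 15, 25, 40, 60, 90, 130]
print(laplace_trend(times, T=150.0))
```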

20.
Real-time systems are an important class of process control systems that need to respond to events under time constraints, or deadlines. Such systems may also be required to deliver service in spite of hardware or software faults in their components. This fault-tolerant characteristic is especially critical in systems whose failure can cause economic disaster and/or loss of lives. This paper reports recent research in the area of analytical modeling of the three major characteristics of real-time systems: timeliness, dependability, and external environmental dependencies. The paper starts with a brief introduction to analytical modeling frameworks such as Markov models and stochastic Petri nets. This is followed by an examination of advances in modeling response-time distributions, reliability, distributed messaging services, and software fault-tolerance in real-time systems.
