Similar Documents (20 results)
1.
A detailed statistical analysis is given of the Moranda geometric de-eutrophication software-reliability model, which is a particular case of a class of general models with proportional failure rates. Statistical inference on the unknown parameters is discussed. The distribution of the maximum-likelihood estimator of the main parameter provides exact confidence intervals and a novel reliability-growth test. Explicit estimators based on a least-squares method are proposed. The model is satisfactorily applied to real software error data. The geometric de-eutrophication model presents interesting theoretical and practical aspects: it is conceptually well founded, and its parameters, which have useful practical interpretations, can be estimated by methods that provide predictions with good statistical properties.
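For reference, the geometric de-eutrophication model in its usual formulation (the abstract does not restate it, so the notation below is the standard one rather than necessarily the paper's): after the (i-1)th fault is removed, the ith inter-failure time T_i is exponentially distributed with rate

    \lambda_i = D\,\varphi^{\,i-1}, \qquad D > 0,\ 0 < \varphi < 1,

and the T_i are independent, so for observed inter-failure times t_1, ..., t_n the likelihood is

    L(D, \varphi) = \prod_{i=1}^{n} D\,\varphi^{\,i-1} \exp\!\left(-D\,\varphi^{\,i-1} t_i\right).

Each fix multiplies the failure rate by the constant factor \varphi, so reliability growth corresponds to \varphi < 1; a growth test of the kind mentioned above can be framed as testing \varphi < 1 against \varphi = 1 using the MLE of \varphi.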

2.
In this paper we present a general shock model for software failures. We give the widely used Jelinski-Moranda model a new interpretation, and derive some other specific models that may be more realistic.
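For context, the Jelinski-Moranda model being reinterpreted has the standard form (not restated in the abstract): with N faults initially present and each fault contributing a constant rate \varphi, the failure rate between the (i-1)th and ith failures is

    \lambda_i = \varphi\,(N - i + 1),

each failure removing exactly one fault. A shock-model reading, in the general spirit of this abstract, treats failures as arrivals of a shock process whose intensity depends on the damage (faults) still present.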

3.
The authors present a new software reliability model, called the lognormal proportional model (LPM). It belongs to the class of proportional models and can be viewed as a Bayes generalization of Moranda's geometric de-eutrophication model, or deterministic proportional model (DPM). It is based on the idea that the modeling of software improvement should be stochastic rather than deterministic. The LPM turns out to be a variance-components linear model, which leads to several estimators of the parameters. The authors present a statistical test to compare the goodness of fit of the general LPM and the DPM for a given realization of the failure process. An application to actual software failure data is briefly described. The LPM fits most data sets better than the DPM, which emphasizes the great variability of most software reliability data.
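A plausible sketch of the stochastic-improvement idea (the abstract does not give the LPM's equations; the form below is an assumption consistent with "lognormal" and "variance components"): in the DPM the successive failure rates satisfy \lambda_i = \lambda_1 \varphi^{i-1}, so \log\lambda_i is an exact linear function of i. The LPM would replace the fixed ratio \varphi by i.i.d. lognormal multipliers,

    \lambda_i = \lambda_{i-1}\,\Phi_i, \qquad \log\Phi_i \sim N(\mu, \sigma^2),

so that \log\lambda_i = \log\lambda_1 + (i-1)\mu + \sum_{j=2}^{i}\varepsilon_j with \varepsilon_j \sim N(0, \sigma^2), i.e., a linear model with a variance component, recovering the DPM as \sigma^2 \to 0. The goodness-of-fit test above can then be read as testing \sigma^2 = 0.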

4.
5.
This paper studies a geometric-process maintenance model for a deteriorating system in a random environment. The number of random shocks produced by the environment up to time t forms a counting process, and each arriving shock reduces the system operating time; the successive reductions are statistically independent and identically distributed random variables. The consecutive repair times of the system after failures form an increasing geometric process, and, in the absence of random shocks, the successive operating times after repairs form a decreasing geometric process. A replacement policy N, under which the system is replaced at the Nth failure, is adopted. An explicit expression for the average cost rate (long-run average cost per unit time) is derived, and an optimal replacement policy is determined analytically. As a particular case, a compound Poisson process model is also studied.
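For readers unfamiliar with geometric processes, the standard definition (due to Lam) that such models use: a sequence {X_n, n = 1, 2, ...} is a geometric process with ratio a > 0 if {a^{n-1} X_n} are independent and identically distributed. For a > 1 the sequence is stochastically decreasing (successive operating times), and for 0 < a < 1 it is stochastically increasing (successive repair times). Under policy N the average cost rate follows from the renewal-reward theorem,

    C(N) = \frac{E[\text{cost incurred in one replacement cycle}]}{E[\text{length of one replacement cycle}]},

and the paper's explicit expression additionally accounts for the shock-induced reductions in operating time; the optimal policy N* minimizes C(N).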

6.
The Prentice-Williams-Peterson (PWP) semiparametric model for the failure processes of repairable systems involves regression on explanatory variables across strata defined by the failure-event count. A heuristic method is developed for assessing the robustness of the PWP model when the true underlying process is nonhomogeneous Poisson with a power-law intensity function. The PWP model performed well for large samples and increasing rates of occurrence of failures, and poorly for small samples and decreasing rates of occurrence of failures.
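In standard notation, the PWP model stratifies the hazard by the number of prior failure events: in stratum s (the sth failure), the conditional intensity is

    \lambda_s(t \mid Z) = \lambda_{0s}(t)\, e^{\beta' Z},

with an unspecified baseline \lambda_{0s} per stratum and regression coefficients \beta. The benchmark power-law NHPP has intensity \lambda(t) = (\delta/\theta)(t/\theta)^{\delta-1}, giving an increasing rate of occurrence of failures for \delta > 1 and a decreasing one for \delta < 1 — the two regimes in which the reported performance differs.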

7.
Burn-in is an important screening method used in predicting, achieving, and enhancing field reliability. Although electronics burn-in has been studied qualitatively, no comprehensive quantitative approach exists for determining optimal burn-in periods. This paper presents a cost-optimization model from a system viewpoint, with the burn-in periods of the components as the decision variables. The model is applied to a recently developed electronic product that uses many ICs. State-of-the-art ICs have high early-failure rates and long infant-mortality periods; proper use of burn-in reduces early failure rates and system deployment costs. The total cost to be minimized is formulated as a function of the mean costs of the component, device burn-in, shop repair, and field repair, which in turn are functions of the mean number of failures during and after burn-in. Component and system reliability are constraints that must be satisfied. The early device failures are assumed to have a Weibull distribution. The formulated problem, with failure rates and cost factors, is optimized, and some basic properties of the reliability and cost functions are discussed.
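A minimal numerical sketch of this kind of trade-off, for a single component rather than the paper's full system model: assume (the figures and cost structure below are illustrative only, not taken from the paper) Weibull early failures with shape < 1 under minimal repair, so the expected number of failures on an interval is the increment of the cumulative hazard H(t) = (t/\eta)^\beta, and failures caught during burn-in cost far less than field failures.

    from scipy.optimize import minimize_scalar

    # Hypothetical parameters -- illustrative only, not taken from the paper
    beta, eta = 0.5, 1000.0   # Weibull shape < 1: infant-mortality region
    c_burn  = 0.05            # cost per hour of burn-in
    c_shop  = 20.0            # cost per failure caught during burn-in
    c_field = 500.0           # cost per failure in the field
    t_field = 8760.0          # fielded operating hours (one year)

    def H(t):
        # Weibull cumulative hazard = expected failures under minimal repair
        return (t / eta) ** beta

    def total_cost(tau):
        # Burn-in cost + shop repairs during burn-in + field repairs afterwards
        return c_burn * tau + c_shop * H(tau) + c_field * (H(tau + t_field) - H(tau))

    res = minimize_scalar(total_cost, bounds=(0.0, 2000.0), method="bounded")
    print(f"optimal burn-in: {res.x:.1f} h, expected cost: {res.fun:.2f}")

Longer burn-in shifts cheap infant-mortality failures from the field to the shop but adds burn-in cost; the optimum balances the two, which is the structure the paper optimizes with reliability constraints added.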

8.
A general procedure incorporates common-cause (CC) failures into system analysis by an implicit method, i.e., after first solving the system probability equation without CC failures. Components of subsets are assumed to be equally vulnerable to a CC of any particular multiplicity. The method allows for age-dependent hazard rates, repairable and nonrepairable components, systems with multiple CC groups, and systems where not all components are statistically identical or subject to CC failures. Key equations are given both for reliability block diagrams and fault trees (success and failure models), considering the system reliability, availability, and failure-intensity functions. Initial failures and certain human errors are included, mainly for standby-system applications. The implicit method can dramatically simplify the Boolean manipulation and quantification of fault trees. Possible limitations and extensions are discussed.

9.
The authors (Proc. Eighth Int. Conf. Software Eng., London, England, p.343-4, 1985) previously introduced a nonparametric model for software-reliability growth based on complete monotonicity of the failure rate. Here they extend the completely monotone software model by developing a method for long-range predictions of reliability growth. They derive upper and lower bounds on extrapolations of the failure rate and the mean function, and use these to estimate the future software failure rate and the mean future number of failures. Preliminary evaluation indicates that the method is competitive with parametric approaches while being more robust.
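The complete-monotonicity assumption referred to here is, in standard notation: the failure rate r(t) has derivatives of all orders with

    (-1)^k\, r^{(k)}(t) \ge 0 \qquad \text{for all } k \ge 0,\ t > 0,

which by Bernstein's theorem is equivalent to r(t) being the Laplace transform of a positive measure, r(t) = \int_0^\infty e^{-tx}\, d\mu(x). Many parametric reliability-growth models have failure rates in this class (mixtures of decreasing exponentials), which is what makes distribution-free bounds on extrapolated rates feasible.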

10.
The penalized-likelihood method is used for a new semi-parametric software reliability model. This new model is a nonparametric generalization of all parametric models in which the failure intensity function depends only on the number of observed failures, viz. number-of-failures (NF) models. Experimental results show that the semi-parametric model generally fits better, and has better one-step predictive quality, than parametric NF models. Using generalized linear models, this paper presents new parametric models (polynomial models) whose performance (deviance and predictive quality) approaches that of the semi-parametric model. Graphical and statistical techniques are used to choose the appropriate polynomial model for each data set. The polynomial models are a very good compromise between the non-validity of the simple assumptions of classical NF models and the complexity of use and interpretation of the semi-parametric model; the latter serves as a reference model that is approached by choosing adequate link and regression functions for the polynomial models.
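A sketch of the setup (notation assumed, not taken from the abstract): in an NF model the intensity at time t is \lambda(t) = h(i(t)), where i(t) is the number of failures observed by t, and parametric NF models fix the form of h (for example, the geometric model's h(i) = D\varphi^{i}). A semi-parametric version would leave g = \log h unrestricted and maximize a penalized log-likelihood of the form

    \ell(g) - \alpha \sum_i \big( \Delta^2 g(i) \big)^2,

where the penalty on second differences of g controls smoothness and \alpha trades data fit against regularity; the polynomial models then replace the free g by a low-degree polynomial in i inside a GLM.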

11.
By defining a module to be a coherent subsystem of independently operating components, each with a constant failure rate, this article derives expressions for the reliability of a standby redundant system consisting of an operating module together with a cold or warm standby module. The closed-form reliability expressions depend on the minimal path sets of each module as well as the component failure rates. Expressions are also derived for the mean time to system failure and for the variance of the system time-to-failure distribution.
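As a small illustration of the path-set computation such closed-form expressions build on (the module structure and rates below are made up): with independent components at constant rates, the probability that at least one minimal path set is fully working at time t follows by inclusion-exclusion.

    import itertools, math

    def module_reliability(path_sets, rates, t):
        # Reliability of a coherent module by inclusion-exclusion over its
        # minimal path sets; components fail independently at constant rates.
        total = 0.0
        for k in range(1, len(path_sets) + 1):
            for combo in itertools.combinations(path_sets, k):
                union = set().union(*combo)  # components needed jointly by these paths
                # P(all components in the union survive to t)
                prob = math.exp(-sum(rates[c] for c in union) * t)
                total += (-1) ** (k + 1) * prob
        return total

    # Illustrative module: components a and b in series, paralleled by c
    paths = [{"a", "b"}, {"c"}]
    rates = {"a": 1e-3, "b": 2e-3, "c": 1.5e-3}
    print(module_reliability(paths, rates, t=100.0))

The standby expressions then combine two such module reliabilities with the switching behavior of a cold or warm standby.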

12.
Two broad categories of human error occur during software development: (1) development errors made during requirements analysis, design, and coding; (2) debugging errors made during attempts to remove faults identified by software inspections and dynamic testing. This paper describes a stochastic model that relates the software failure intensity function to development- and debugging-error occurrence throughout all software life-cycle phases. Because data on development and debugging errors are available early in the life-cycle, they can be used to make early predictions of software reliability, so reliability becomes a variable that can be controlled as early as possible in the development life-cycle. The model parameters were derived from data reported in the open literature. A procedure is suggested for accounting for the impact of influencing factors (e.g., experience, schedule pressure) on the parameters of the stochastic model; it is based on the success likelihood methodology (SLIM). The stochastic model is then used to study the introduction and removal of faults and to calculate the resulting failure intensity for a small software product developed using a waterfall process.

13.
A variation of the Jelinski/Moranda model is described. Its main feature is that the variable (growing) size of a developing program is accommodated, so that the quality of a program can be estimated by analyzing an initial segment of the written code. Two parameters are estimated from the data, which comprise: a) the time separations between error detections; b) the number of errors per written instruction; c) the failure (or finding) rate of a single error; and d) a time record of the number of instructions under test. The model permits predictions of the MTTF and error content of any software package that is homogeneous with respect to its complexity (error making/finding). It assists in determining quality, as measured by error content, early on, and could eliminate the present practice of applying models to the wrong regimes (decreasing-failure-rate models applied to growing software packages). The growth model is analytically very tractable. The important requirement for application is that the error-making rate must be constant across the entire software program.
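One plausible form of such a growth model (the abstract lists the data but not the equations, so this hazard is an assumption): with I(t) instructions under test at time t, a constant error-making rate of \rho errors per instruction, and each residual error found at rate \varphi, the program failure rate would be

    \lambda(t) = \varphi\,\big[\rho\, I(t) - m(t)\big],

where m(t) is the number of errors detected by time t. Since I(t) grows while coding continues, \lambda(t) can rise even as debugging proceeds, which is exactly why fixed-size decreasing-failure-rate models misfit growing programs.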

14.
This paper presents a stochastic analysis of repairable systems involving primary as well as secondary failures, using two models. The first model represents a system with two identical warm standbys; the failure rates of the units and the system are constant and independent, while the repair times are arbitrarily distributed. The second model consists of three repairable regions: the system operates normally if all three regions are operating, fails if all three regions fail, and otherwise operates at a derated level; the failure rates and repair times of the regions are constant and independent. The first model is analyzed using the supplementary variable technique, and the second using the regenerative point technique of Markov renewal processes. Various expressions, including system availability, system reliability, and mean time to system failure, are obtained.

15.
A dynamic analytic solution is described for a 2^N-state general availability model with N components having constant failure and repair rates. From this model, a family of models is developed using truncation and/or attenuation of transition rates, and expressions are derived for the steady-state solutions. Spreadsheet programs are (1) given for obtaining these solutions and (2) compared with BASIC programs yielding the same results. State probabilities of the truncation and level-attenuation models are either greater or smaller than those of comparable states in the general model, so the states of the general model serve as lower or upper bounds for states in these two model types. Other bounds can be constructed from single exponentials based on steady-state probabilities. From this family of models, bounds should exist on state probabilities in models of similar structure but with different constraints on failure and repair rates. A specific model is pursued in which failures are restricted to any two components and the failure rate of one component is assumed to change on the second level of failure. Under these conditions, dynamic bounds on the initial-state probability, and some, but not all, steady-state bounds on the other state probabilities, can be found. Examples illustrate the various bounds.
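A minimal numerical check of the general model's steady-state solution (assuming, as the 2^N-state Markov structure suggests, that the components behave independently with constant rates): each component is up with probability \mu/(\lambda+\mu), and state probabilities are products.

    import itertools

    def steady_state_probs(lam, mu):
        # Steady-state probability of every up/down state for N independent
        # components with constant failure rates lam[i] and repair rates mu[i].
        avail = [m / (l + m) for l, m in zip(lam, mu)]   # P(component i is up)
        probs = {}
        for state in itertools.product([1, 0], repeat=len(lam)):  # 1 = up, 0 = down
            p = 1.0
            for up, a in zip(state, avail):
                p *= a if up else (1.0 - a)
            probs[state] = p
        return probs

    probs = steady_state_probs(lam=[1e-3, 2e-3, 5e-4], mu=[0.1, 0.05, 0.2])
    print(sum(probs.values()))   # sanity check: probabilities sum to 1
    print(probs[(1, 1, 1)])      # all-up state probability

Truncating or attenuating transition rates perturbs these probabilities in one direction, which is what makes the general model's states usable as bounds.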

16.
Generally, the reliability of complex repairable systems is characterized by a time-dependent rate of occurrence of failures; the corresponding mathematical model is the nonhomogeneous Poisson process. This paper covers mathematical properties of the nonhomogeneous Poisson process, some frequently used types of failure intensity functions, and some results and aspects of statistical inference. The given models are applicable to the reliability of complex repairable systems, to software systems, and to reliability-growth problems.
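The basic NHPP facts in standard notation: with intensity \lambda(t) and mean function m(t) = \int_0^t \lambda(u)\,du, the failure count satisfies

    P\{N(t) = n\} = \frac{[m(t)]^n}{n!}\, e^{-m(t)},

with independent Poisson increments over disjoint intervals. A frequently used intensity is the power-law form \lambda(t) = (\beta/\theta)(t/\theta)^{\beta-1} (the Duane/Crow model), for which m(t) = (t/\theta)^\beta; \beta < 1 describes reliability growth and \beta > 1 deterioration.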

17.
This paper presents an approach to system reliability modeling in which failures and errors are not statistically independent. The repetition of failures and errors until their causes are removed is affected by the system's processes and degrades system reliability. Four types of failures are introduced: hardware transients, software design errors, hardware design errors, and program faults. The probability of failure, mean time to failure, and system reliability depend on the type of failure. Actual measurements show that the most critical factor for system reliability is the period after the occurrence of a failure, during which the failure can be repeated in every process that accesses the failed component. An example involving measurements collected on an IBM 4331 installation validates the model and shows its applications. The degradation of system reliability can be appreciable even over very short periods of time, which is why the conditional probability of failure repetition is introduced. The reliability model allows prediction of system reliability based on calculation of the mean time to failure. Comparison with the measurement results shows that the model with process-dependent repetition of failures approximates system reliability more accurately than a model assuming independent failures.

18.
This research effort has developed a mathematical model for bathtub-shaped hazards (failure rates) of operating systems with uncensored data; the model is used to predict the reliability of systems with such hazards. Early in a system's lifetime there may be a relatively large number of failures due to initial weaknesses or defects in materials and manufacturing processes; this period is called the "infant mortality" period. During the middle period of operation fewer failures occur, caused when environmental stresses exceed the design strength of the system. Because it is difficult to predict the environmental stress amplitudes or the system strength as deterministic functions of time, these middle-life failures are often called "random failures." As the system ages it deteriorates, and more failures occur; this region is called the "wearout" period. Plotting the failure rate across all three regions produces a bathtub-shaped curve, and the model developed here accounts for all three regions simultaneously. The model has been validated for accuracy using Halley's mortality table and is used to predict reliability with both least-squares and maximum-likelihood estimators.
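The paper's exact hazard function is not reproduced in the abstract, but one standard way to capture all three regions in a single expression, shown here only as an illustration, is a sum of two Weibull hazards:

    h(t) = \frac{\beta_1}{\eta_1}\left(\frac{t}{\eta_1}\right)^{\beta_1 - 1} + \frac{\beta_2}{\eta_2}\left(\frac{t}{\eta_2}\right)^{\beta_2 - 1}, \qquad \beta_1 < 1 < \beta_2,

where the first term dominates early (decreasing, infant mortality), the second dominates late (increasing, wearout), and the roughly flat region between them gives the random-failure floor; reliability then follows from R(t) = \exp(-\int_0^t h(u)\,du).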

19.
This paper presents a model of a human-machine system with two active units and one standby unit, and a general repair-time distribution for the failed system. In addition, the model takes into account the occurrence of common-cause failures. The method of linear ordinary differential equations is used to obtain general expressions for the system steady-state availability under failed-system repair-time distributions such as the gamma, Weibull, lognormal, exponential, and Rayleigh. Generalized expressions for system reliability, time-dependent availability, mean time to failure, and the variance of the time to failure are also presented. Selected plots demonstrate the impact of human error on system steady-state availability, reliability, time-dependent availability, and mean time to failure.

20.
A two-unit standby redundant system with two types of failures is considered, where the probability of occurrence of each failure type is known. The Laplace transform (LT) of the survivor function of the time up to the first system failure, and its mean, the time to first system failure (TFSF), are derived. Finally, a theorem about the effect of preventive maintenance (PM) is proved.
