Similar Articles
20 similar articles found.
1.
The component failure probability estimates obtained from analysis of binomial system testing data are very useful because they reflect the operational failure probability of components in field conditions similar to the test environment. In practice, this type of analysis is often confounded by data masking: the status of the tested components is unknown. Methods that account for this type of uncertainty are usually computationally intensive and impractical for complex systems. In this paper, we consider masked binomial system testing data and develop a probabilistic model to efficiently estimate component failure probabilities. In the model, all system tests are classified into test categories based on component coverage, and the component coverage of the test categories is modeled by a bipartite graph. Test category failure probabilities conditional on the status of the covered components are defined. An EM algorithm to estimate component failure probabilities is developed based on a simple but powerful concept: equivalent failures and tests. By simulation we not only demonstrate the convergence and accuracy of the algorithm but also show that the probabilistic model can analyze systems in series, in parallel, and in any other user-defined structure. A case study illustrates an application to test case prioritization.
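To make the "equivalent failures and tests" idea concrete, the sketch below runs an EM iteration for a deliberately simplified case that is not the paper's full model: every component is exercised in every test of a small series system, an identified failure reveals the complete component status, and a masked failure only reports that the system failed. The function name and the test counts are invented for illustration.

```python
import numpy as np

def em_masked_series(n_tests, identified_failures, n_masked, tol=1e-10, max_iter=10000):
    """EM estimate of component failure probabilities for a series system.

    Assumptions of this sketch (a simplification of the paper's model):
      * every test exercises all components,
      * an 'identified' failure reveals the full component status
        (exactly that component failed, the others worked),
      * a 'masked' failure only says the system failed.
    """
    identified = np.asarray(identified_failures, dtype=float)
    p = np.full(identified.size, 0.05)          # initial guesses
    for _ in range(max_iter):
        p_sys = 1.0 - np.prod(1.0 - p)          # P(a system test fails)
        # E-step: equivalent failures = identified + expected share of masked
        eq_failures = identified + n_masked * p / p_sys
        # M-step: every component was exercised in every test (equivalent tests = n_tests)
        p_new = eq_failures / n_tests
        if np.max(np.abs(p_new - p)) < tol:
            break
        p = p_new
    return p

# 200 tests: 12 failures traced to component 1, 5 to component 2, 8 masked
print(em_masked_series(200, [12, 5], 8))
```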

2.
This article illustrates a method by which arbitrarily complex series/parallel reliability systems can be analyzed; the method is demonstrated on the series–parallel and parallel–series systems. Analytical expressions are determined for the investments and utilities of the defender and the attacker, which depend on their unit costs of investment for each component, the contest intensity for each component, and their evaluations of the value of system functionality. For a series–parallel system, infinitely many components in parallel benefit the defender maximally regardless of the finite number of parallel subsystems in series. Conversely, infinitely many subsystems in series benefit the attacker maximally regardless of the finite number of components in parallel in each subsystem. For a parallel–series system, the results are reversed. With equivalent components, equal unit costs for defender and attacker, equal contest intensity for all components, and equally many components in series and in parallel, the defender always prefers the series–parallel system to the parallel–series system, and the converse holds for the attacker. Hence, from the defender's perspective, ceteris paribus, the series–parallel system is more reliable and has fewer "cut sets" or failure modes.
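The article analyzes defender–attacker contests, so component reliabilities are outcomes of the game rather than inputs; still, the structural comparison it draws can be illustrated with fixed, independent component reliabilities. The sketch below, under those simplifying assumptions (not taken from the article), shows that with equal component counts the series–parallel arrangement yields higher reliability than the parallel–series arrangement.

```python
def series_parallel_reliability(p, n, m):
    """m parallel subsystems in series, each with n identical components of
    reliability p: the system works iff every subsystem has at least one
    working component."""
    return (1.0 - (1.0 - p) ** n) ** m

def parallel_series_reliability(p, n, m):
    """n series branches in parallel, each branch containing m identical
    components of reliability p: the system works iff at least one branch
    has all of its components working."""
    return 1.0 - (1.0 - p ** m) ** n

p, n, m = 0.9, 3, 3
print(series_parallel_reliability(p, n, m))   # ~0.997
print(parallel_series_reliability(p, n, m))   # ~0.980
```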

3.
In this paper, we model embedded system design and optimization, considering component redundancy and uncertainty in the component reliability estimates. The systems being studied consist of software embedded in associated hardware components. Very often, component reliability values are not known exactly; therefore, for reliability analysis and system optimization, it is meaningful to treat component reliability estimates as random variables with associated estimation uncertainty. In this research, the system design process is formulated as a multiple-objective optimization problem: to maximize an estimate of system reliability and to minimize the variance of that estimate. The two objectives are combined by penalizing the variance of prospective solutions. The two most common fault-tolerant embedded system architectures, N-Version Programming and Recovery Block, are considered as strategies to improve system reliability through redundancy. Four distinct models are presented to demonstrate the proposed optimization techniques with and without redundancy. For many design problems, multiple functionally equivalent software versions exhibit failure correlation even if they have been developed independently; the correlation may result from faults in the software specification, faults in a voting algorithm, and/or related faults in any two software versions. Our approach accounts for this correlation in formulating practical optimization models. Genetic algorithms with a dynamic penalty function are applied to solve the optimization problem, and reasonable and interesting results are obtained and discussed.

4.
In real systems, components often operate in essentially different modes, characterized by changing load or environmental conditions. These modes result in different failure rates and lifetime distributions. In this paper we present a model for a single switch-over between two distinct lifetime distributions. Our approach takes into account the deterioration of a component, which is essential for modeling the effects of load changes on the lifetime distribution. The single-component system and the one-out-of-two:G system are discussed.

5.
The exponential distribution is inadequate as a failure time model for most components; however, under certain conditions (in particular, that component failure rates are small, component failures are mutually independent, and failed components are immediately replaced or perfectly repaired), it is applicable to complex repairable systems with large numbers of components in series, regardless of the component distributions, as shown by Drenick in 1960. This result implies that system behavior may become simpler as more components are added. We review the necessary conditions for the result and present simulation studies to assess how well it holds in systems with finite numbers of components. We also note that Drenick's result is analogous to similar results in other systems disciplines, which likewise yield simpler behavior as the number of entities in the system increases.
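A quick way to see Drenick's limit at work is to simulate a repairable series system in which every component is renewed on failure and to check how close the system's inter-failure times come to an exponential distribution (coefficient of variation near 1). The sketch below does this under assumed Weibull component lifetimes; the parameters and the burn-in choice are illustrative, not from the paper's simulation studies.

```python
import numpy as np

rng = np.random.default_rng(0)

def system_interfailure_times(n_components, shape=2.0, scale=1000.0,
                              horizon=20000.0, burn_in=5000.0):
    """Series system of independently renewing components: each component
    has Weibull(shape, scale) lifetimes and is replaced as soon as it
    fails, and every component failure is a system failure.  Returns the
    system inter-failure times observed after a burn-in period."""
    times = []
    for _ in range(n_components):
        t = 0.0
        while True:
            t += scale * rng.weibull(shape)
            if t > horizon:
                break
            times.append(t)
    times = np.sort(np.asarray(times))
    times = times[times > burn_in]          # discard the initial transient
    return np.diff(times)

for n in (2, 10, 100):
    gaps = system_interfailure_times(n)
    cv = gaps.std() / gaps.mean()           # close to 1 for an exponential
    print(f"{n:3d} components: CV of system inter-failure times = {cv:.2f}")
```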

6.
We consider coherent systems with components whose exchangeable lifetime distributions come from the failure-dependent proportional hazard model, i.e., the consecutive failures satisfy the assumptions of the generalized order statistics model. For a fixed system and given failure rate proportion jumps, we provide sharp bounds on the deviations of system lifetime distribution quantiles from the respective quantiles of single component nominal and actual lifetime distributions. The bounds are expressed in the scale units generated by the absolute moments of various orders of the component lifetime centered about the median of its distribution.

7.
Over the years, several tools have been developed to estimate the reliability of hardware and software components; typically, such tools handle either hardware or software, but not both. This paper presents the Software Tool for Reliability Estimation (STORE), which can be used for systems containing hardware and/or software components. For software components, exponential, Weibull, gamma, power, geometric, and inverse-linear models are implemented, and goodness-of-fit statistics are provided for each model so that the user can select the most appropriate model for a given system configuration and failure data. The STORE program can analyze series, parallel, and complex systems; a tie-set and cut-set algorithm is used to determine the reliability of a complex system. The paper presents several examples to demonstrate the software tool.
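The abstract does not detail STORE's tie-set and cut-set algorithm, so the sketch below only shows one standard way a minimal cut-set representation yields system reliability for independent components: inclusion–exclusion over the cut-set failure events. The cost grows exponentially with the number of cut sets, which is why practical tools combine such representations with bounds or reductions; the example network and unreliabilities here are invented.

```python
from itertools import combinations
from math import prod

def unreliability_from_cutsets(cutsets, q):
    """Exact system unreliability by inclusion-exclusion over minimal cut
    sets, assuming independent components.  `cutsets` is a list of sets of
    component labels; `q` maps label -> component unreliability."""
    total = 0.0
    for r in range(1, len(cutsets) + 1):
        for combo in combinations(cutsets, r):
            union = set().union(*combo)        # every cut set in combo fails
            term = prod(q[i] for i in union)
            total += term if r % 2 == 1 else -term
    return total

# 2-out-of-3:G example: the minimal cut sets are all pairs of components
q = {"A": 0.1, "B": 0.2, "C": 0.05}
cuts = [{"A", "B"}, {"A", "C"}, {"B", "C"}]
print(1.0 - unreliability_from_cutsets(cuts, q))   # system reliability
```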

8.
A new approximate method for computing structural system reliability
A new approximate method for computing system reliability is proposed. First, the conditional failure mode of one linear failure mode, given the failure of another linear failure mode, is approximated by an equivalent linear failure mode. Then, using the basic principle of conditional probability together with the derived analytical expression for the equivalent failure mode, an approximate method for computing the reliability of parallel systems is developed; the complex problem of evaluating the failure probability of an intersection of failure modes is thereby converted into the simple problem of evaluating a product of failure probabilities of equivalent linear failure modes. Since series systems and mixed series–parallel systems can both be expressed as linear combinations of intersections of failure modes, the method is readily applied to computing their system reliability as well. The algorithm is simple, computationally inexpensive, and accurate, and is therefore well suited to the reliability analysis of large structural systems.

9.
A case study for quantifying system reliability and uncertainty
The ability to estimate system reliability with an appropriate measure of the associated uncertainty is important for understanding its expected performance over time. Frequently, obtaining full-system data is prohibitively expensive, impractical, or not permissible. Hence, methodology that allows different types of data to be combined at the component or subsystem level can improve estimation at the system level. We apply methodologies for aggregating uncertainty from component-level data to estimate system reliability and quantify its overall uncertainty. This paper provides a proof of concept that uncertainty quantification methods based on Bayesian methodology can be constructed and applied to system reliability problems for a system with both series and parallel structures.
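A minimal sketch of the general approach, assuming beta-binomial models at the component level and independence across components: draw component reliabilities from their posteriors and push each draw through the system structure function to obtain a posterior for system reliability. The component names, test counts, and structure below are hypothetical, not the case study's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Component-level test data: (successes, trials).  Hypothetical numbers.
data = {"pump_A": (48, 50), "pump_B": (47, 50), "controller": (95, 100)}

def posterior_draws(successes, trials, n_draws, a0=1.0, b0=1.0):
    """Beta(1,1) prior + binomial data -> Beta posterior samples of
    component reliability."""
    return rng.beta(a0 + successes, b0 + trials - successes, n_draws)

n_draws = 100_000
r = {name: posterior_draws(s, n, n_draws) for name, (s, n) in data.items()}

# Assumed structure: the two pumps are redundant (parallel) and in series
# with the controller.
r_sys = (1.0 - (1.0 - r["pump_A"]) * (1.0 - r["pump_B"])) * r["controller"]

print("posterior mean system reliability:", r_sys.mean())
print("95% credible interval:", np.percentile(r_sys, [2.5, 97.5]))
```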

10.
Life data from systems of components are often analysed to estimate the reliability of the individual components. These estimates are useful since they reflect the reliability of the components under actual operating conditions. However, owing to the cost or time involved with failure analysis, the exact component causing system failure may be unknown or ‘masked’. That is, the cause may only be isolated to some subset of the system's components. We present an iterative approach for obtaining component reliability estimates from such data for series systems. The approach is analogous to traditional probability plotting. That is, it involves the fitting of a parametric reliability function to a set of nonparametric reliability estimates (plotting points). We present a numerical example assuming Weibull component life distributions and a two-component series system. In this example we find estimates with only 4 per cent of the computation time required to find comparable MLEs.

11.
Reliability assessments of repairable (electronic) equipment are often based on failure data recorded under field conditions. The main objective of the analyses is to provide information that can be used to improve reliability through design changes. For this purpose it is of particular interest to be able to locate ‘trouble-makers’, i.e. components that are particularly likely to fail. In the present context, reliability is measured in terms of the mean cumulative number of failures as a function of time. This function may be considered for the system as a whole, or for stratified data, where the stratification is obtained by sorting the data according to different factors such as component position, production series, etc. The mean cumulative number of failures can then be estimated either nonparametrically, as an average of the observed failures, or parametrically, if a certain model for the lifetimes of the components involved is assumed. As an example we consider a simple component lifetime model based on the assumption that components are ‘drawn’ randomly from a heterogeneous population in which a small proportion of the components are weak (with a small mean lifetime) and the remainder are standard components (with a large mean lifetime). This model allows an analytical expression for the mean cumulative number of failures to be formulated. In both the nonparametric and the parametric case, the uncertainty of the estimation may be assessed by computing a confidence interval for the estimated values (a confidence band for the estimated time functions). The determination of confidence bands provides a basis for assessing the significance of the factors underlying the stratification. The methods are illustrated through an industrial case study using field failure data.
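The nonparametric estimate mentioned in the abstract reduces, when all units are observed over the same window, to averaging the cumulative failure counts across units. The sketch below implements only that simple case; the heterogeneous weak/standard population model and the confidence bands are not reproduced, and the failure times are made up.

```python
import numpy as np

def mean_cumulative_function(failure_times_per_unit, t_grid):
    """Nonparametric mean cumulative number of failures, assuming every
    unit is observed over the whole time grid (no staggered entry or
    censoring): the average count of failures up to each time point."""
    n_units = len(failure_times_per_unit)
    all_times = np.sort(np.concatenate(failure_times_per_unit))
    # For each grid point, count the failures observed so far, averaged over units
    counts = np.searchsorted(all_times, t_grid, side="right")
    return counts / n_units

# Recorded failure times (hours) for three repairable units (hypothetical)
units = [np.array([120.0, 340.0, 900.0]),
         np.array([450.0]),
         np.array([60.0, 800.0, 1500.0, 1900.0])]
t_grid = np.linspace(0, 2000, 5)
print(mean_cumulative_function(units, t_grid))
```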

12.
13.
On the Effect of Redundancy for Systems with Dependent Components
Parallel redundancy is a common approach to increasing system reliability and mean time to failure. When studying systems with redundant components, it is usually assumed that the components are independent; however, this assumption is seldom valid in practice. In the case of dependent components, the effectiveness of adding a component may be quite different from the independent case. In this paper we investigate how the degree of correlation affects the increase in mean lifetime obtained from parallel redundancy when the two components are positively quadrant dependent. A number of bivariate distributions that can be used in the modeling of dependent components are compared, and various bounds are derived. The results are useful in reliability analysis as well as for designers who must take possible dependence among the components into account.
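The abstract's bivariate lifetime models are not specified here, so the sketch below uses one convenient choice, a Gaussian copula with exponential marginals (positively quadrant dependent for non-negative correlation), to show how increasing dependence erodes the mean-lifetime gain from parallel redundancy. The parameters are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

def mean_parallel_lifetime(rho, mean_life=100.0, n=200_000):
    """Monte Carlo estimate of E[max(X1, X2)] for two unit lifetimes with
    exponential marginals coupled by a Gaussian copula with correlation
    rho (rho >= 0 gives positive quadrant dependence)."""
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    u = norm.cdf(z)                               # copula uniforms
    x = -mean_life * np.log1p(-u)                 # exponential quantile transform
    return np.max(x, axis=1).mean()

for rho in (0.0, 0.3, 0.6, 0.9):
    print(f"rho={rho:.1f}: E[max lifetime] ~ {mean_parallel_lifetime(rho):.1f}")
```

For rho = 0 the simulated mean is close to 1.5 times the single-component mean, the classical gain for two independent exponential units, and it shrinks toward the single-component mean as rho approaches 1.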

14.
A new methodology for the reliability optimization of a k dissimilar-unit, nonrepairable, cold-standby redundant system is introduced in this paper. Each unit is composed of a number of independent components with generalized Erlang lifetime distributions arranged in a series–parallel configuration. We also propose an approximate technique to extend the model to general nonconstant hazard functions. To evaluate the system reliability, we apply the shortest-path technique in stochastic networks. The purchase cost of each component is assumed to be an increasing function of its expected lifetime, and for each component of the system several candidate components with different distribution parameters are available. The objective of the reliability optimization problem is to select the best components from the available set to be placed in the standby system so as to minimize the initial purchase cost, maximize the system mean time to failure, minimize the variance of the time to failure, and maximize the system reliability at the mission time. The goal attainment method is used to solve a discrete-time approximation of the original problem.

15.
A new algorithm is proposed to approximate terminal-pair network reliability based on minimal cut theory. Unlike many existing models that decompose the network into a series–parallel or parallel–series structure based on minimal cuts or minimal paths, the new model estimates the reliability by summing the linear and quadratic unreliability of each minimal cut set. Given component test data, the new model provides tight moment bounds for the network reliability estimate; these moment bounds can be used to quantify the estimation uncertainty that propagates from the component-level estimates to the network level. Simulations and numerical examples show that the new model generally outperforms the Esary–Proschan and edge-packing bounds, especially for highly reliable systems.
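For reference, the Esary–Proschan bounds that the abstract uses as a baseline can be computed directly from the minimal cut and path sets when components are independent; the sketch below does so for a small hypothetical two-path network (it does not implement the paper's linear-plus-quadratic cut-set approximation or its moment bounds).

```python
from math import prod

def esary_proschan_bounds(min_cuts, min_paths, p):
    """Classical Esary-Proschan reliability bounds for a coherent system
    with independent components.  `p` maps component -> reliability;
    the lower bound multiplies the survival probabilities of the minimal
    cut sets, the upper bound uses the minimal path sets."""
    q = {i: 1.0 - pi for i, pi in p.items()}
    lower = prod(1.0 - prod(q[i] for i in cut) for cut in min_cuts)
    upper = 1.0 - prod(1.0 - prod(p[i] for i in path) for path in min_paths)
    return lower, upper

# Hypothetical terminal pair connected by two disjoint two-component paths
p = {"a": 0.95, "b": 0.9, "c": 0.9, "d": 0.95}
min_paths = [{"a", "b"}, {"c", "d"}]
min_cuts = [{"a", "c"}, {"a", "d"}, {"b", "c"}, {"b", "d"}]
# Exact terminal-pair reliability here is 1 - (1 - 0.855)**2 ~ 0.979
print(esary_proschan_bounds(min_cuts, min_paths, p))
```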

16.
We study the situation in which test results are available for systems consisting of some of k independent but nonidentical components connected in series. The component lifetimes are assumed to be exponentially distributed. The method of maximum likelihood is used to obtain the estimates, a chi-squared approximation is used for the mean and variance of the maximum likelihood estimate, and a method for reducing its bias is presented.

17.
Systems, structures, and components of nuclear power plants are subject to Technical Specifications (TSs) that establish operational limitations and maintenance and test requirements, with the objective of keeping the risk associated with the plant within the limits imposed by the regulatory agencies. Recently, in an effort to improve the competitiveness of nuclear energy in a deregulated market, modifications to maintenance policies and TSs have been considered from a risk-informed viewpoint, which judges the effectiveness of a TS, e.g. a particular maintenance policy, with respect to its implications for the safety and economics of system operation. In this regard, a recent policy statement of the US Nuclear Regulatory Commission declares the use of Probabilistic Risk Assessment models appropriate for evaluating the effects of a particular TS on the system. These models rely on a set of parameters at the component level (failure rates, repair rates, frequencies of failure on demand, human error rates, inspection durations, and others) whose values are typically affected by uncertainties. Thus, the estimate of the system performance parameters corresponding to a given TS value must be supported by some measure of the associated uncertainty. In this paper we propose an approach, based on the effective coupling of genetic algorithms and Monte Carlo simulation, for the multiobjective optimization of the TSs of nuclear safety systems. The method transparently and explicitly accounts for the uncertainties in the model parameters by attempting to minimize both the expected value of the system unavailability and its associated variance. The costs of the alternative TS solutions are included as constraints in the optimization. An application to the Reactor Protection Instrumentation System of a Pressurized Water Reactor is demonstrated.

18.
For older water pipeline materials such as cast iron and asbestos cement, future pipe failure rates can be extrapolated from the large volumes of historical failure data held by water utilities. However, for newer pipeline materials such as polyvinyl chloride (PVC), only limited failure data exist, and confident forecasts of future pipe failures cannot be made from historical data alone. To address this problem, this paper presents a physical probabilistic model developed to estimate failure rates in buried PVC pipelines as they age. The model assumes that, under in-service operating conditions, crack initiation can occur from inherent defects located in the pipe wall. Linear elastic fracture mechanics theory is used to predict the time to brittle fracture for pipes with internal defects subjected to combined internal pressure and soil deflection loading together with through-wall residual stress. To include uncertainty in the failure process, the inherent defect size is treated as a stochastic variable and modelled with an appropriate probability distribution. Microscopic examination of fracture surfaces from field failures in Australian PVC pipes suggests that the 2-parameter Weibull distribution can be applied. Monte Carlo simulation is then used to estimate lifetime probability distributions for pipes with internal defects subjected to typical operating conditions. As with the inherent defect size, the 2-parameter Weibull distribution is shown to be appropriate for modelling uncertainty in the predicted pipe lifetime. The Weibull hazard function for pipe lifetime is then used to estimate the expected failure rate (per pipe length per year) as a function of pipe age. To validate the model, predicted failure rates are compared with aggregated failure data from 17 UK water utilities obtained from the United Kingdom Water Industry Research (UKWIR) National Mains Failure Database. In the absence of actual operating pressure data in the UKWIR database, typical values from Australian water utilities were assumed to apply. While the physical probabilistic failure model shows good agreement with the data recorded by UK water utilities, actual UK operating pressure data are required to complete the model validation.
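Once a Weibull lifetime distribution has been fitted, the age-dependent failure rate used in the final step of the abstract is simply the Weibull hazard function. The sketch below evaluates it for illustrative shape and scale values; these numbers are not the fitted parameters from the paper.

```python
import numpy as np

def weibull_hazard(t, shape, scale):
    """Hazard function of a 2-parameter Weibull lifetime distribution,
    h(t) = (shape/scale) * (t/scale)**(shape - 1)."""
    return (shape / scale) * (t / scale) ** (shape - 1.0)

# Hypothetical Weibull parameters for a buried PVC pipe segment (years)
shape, scale = 2.5, 180.0

ages = np.array([10.0, 25.0, 50.0, 75.0, 100.0])
# The hazard is the conditional failure rate of a surviving segment, i.e.
# expected failures per surviving segment per year at the given age.
for age, h in zip(ages, weibull_hazard(ages, shape, scale)):
    print(f"age {age:5.0f} yr: failure rate ~ {h:.5f} per segment-year")
```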

19.
A simple sufficient condition is given for a system to have an increasing failure rate when the identical components comprising it have an increasing failure rate. Systems which function if and only if at least k of the n components function (“k out of n” systems) satisfy this condition. For systems of non-identical components, upper and lower bounds on failure rate are obtained in terms of component failure rates. These bounds are increasing functions of time for “k out of n” structures having components with increasing failure rates.
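For identical, independent components with reliability p at a given time, the "k out of n" structure referenced here has the familiar binomial form, sketched below; the IFR closure result itself concerns the lifetime distributions rather than this fixed-time calculation.

```python
from math import comb

def k_out_of_n_reliability(k, n, p):
    """Reliability of a system that works iff at least k of its n
    identical, independent components (each with reliability p) work."""
    return sum(comb(n, j) * p**j * (1.0 - p)**(n - j) for j in range(k, n + 1))

print(k_out_of_n_reliability(2, 3, 0.9))   # 2-out-of-3 system: 0.972
```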

20.
Reliability approximation using finite Weibull mixture distributions
The shape of measured or design life distributions of systems can vary considerably and therefore frequently cannot be approximated by simple distribution functions. The aim of the paper is to prove that the reliability of an arbitrary system can be approximated well by a finite Weibull mixture with positive component weights only, without knowing the structure of the system, provided that the unknown parameters of the mixture can be estimated. Five examples are presented to support the main idea. To estimate the unknown component parameters and the component weights of the Weibull mixture, some existing methods are applied and an EM algorithm for the m-fold Weibull mixture is derived. The fitted distributions obtained by the different methods are compared with the empirical ones by calculating the AIC and δC values. It can be concluded that the suggested Weibull mixture with an arbitrary but finite number of components is suitable for lifetime data approximation. For parameter estimation, a combination of the alternative algorithm and the EM algorithm is suggested.
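A finite Weibull mixture with positive weights has a closed-form reliability and density, which is all that is needed to evaluate a fitted mixture and compare candidates by AIC (2k minus twice the log-likelihood, with k the number of free parameters). The sketch below evaluates both for a hypothetical two-fold mixture; it does not reproduce the paper's EM derivation or the δC criterion.

```python
import numpy as np

def weibull_mixture_reliability(t, weights, shapes, scales):
    """Reliability of a finite Weibull mixture with positive weights:
    R(t) = sum_j w_j * exp(-(t / eta_j)**beta_j)."""
    t = np.atleast_1d(np.asarray(t, dtype=float))[:, None]
    w = np.asarray(weights, dtype=float)
    surv = np.exp(-(t / np.asarray(scales)) ** np.asarray(shapes))
    return surv @ w

def weibull_mixture_loglik(t, weights, shapes, scales):
    """Log-likelihood of uncensored lifetime data under the mixture,
    usable for AIC-based comparison of candidate mixtures."""
    t = np.asarray(t, dtype=float)[:, None]
    w, b, e = map(np.asarray, (weights, shapes, scales))
    dens = (b / e) * (t / e) ** (b - 1.0) * np.exp(-(t / e) ** b)
    return float(np.sum(np.log(dens @ w)))

# Hypothetical 2-fold mixture: an early-failure mode plus a wear-out mode
w, beta, eta = [0.2, 0.8], [0.8, 3.0], [50.0, 400.0]
print(weibull_mixture_reliability([10, 100, 400], w, beta, eta))
```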
