Similar Literature
20 similar records found.
1.
This work estimates component reliability from masked series-system life data, i.e., data in which the exact component causing system failure may be unknown. The authors extend the results of Usher and Hodgson (1988) by deriving exact maximum likelihood estimators (MLEs) for the general case of a series system of three exponential components with independent masking. Their previous work showed that closed-form MLEs are intractable, so they propose an iterative method for solving the resulting system of three nonlinear likelihood equations.
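The iterative idea can be sketched for exponential series systems. The code below is a hedged illustration, not the authors' exact scheme: it assumes independent masking, so each observation contributes a likelihood factor proportional to (Σ_{j∈S} λ_j)·exp(−Λt), and it uses a standard EM-type reattribution as the fixed-point update. The function name and data layout are hypothetical.

```python
def masked_exp_mle(obs, n_components=3, iters=200):
    """EM-style fixed point for exponential component failure rates from
    masked series-system life data.
    obs: list of (system_life, candidate_set) pairs, where candidate_set
    holds the indices of components that may have caused the failure."""
    total_time = sum(t for t, _ in obs)
    rates = [1.0] * n_components
    for _ in range(iters):
        # E-step: expected number of failures attributed to each component,
        # splitting each masked failure in proportion to the current rates.
        counts = [0.0] * n_components
        for _, s in obs:
            denom = sum(rates[j] for j in s)
            for j in s:
                counts[j] += rates[j] / denom
        # M-step: exponential MLE using the attributed counts.
        rates = [c / total_time for c in counts]
    return rates
```

With no masking (singleton candidate sets) the update reduces after one pass to the usual exponential MLE: failures divided by total time on test.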

2.
Masked system life test data arise when the exact component that causes the system failure is unknown. Instead, it is assumed that there are two observable quantities for each system on the life test: the system lifetime, and the set of components containing the component that led to the system failure. The component leading to the system failure may be completely unknown (general masking), isolated to a subset of system components (partial masking), or exactly known (no masking). In dependent masked system life test data, the probability of masking is assumed to depend on the true cause of system failure. Masking is usually due to limited resources for diagnosing the cause of system failures, as well as the modular nature of the system. In this paper, we present point and interval maximum likelihood and Bayes estimators for the reliability measures of the individual components in a multi-component system in the presence of dependent masked system life test data. The lifetime distributions of the system components are assumed to be geometric with different parameters. A simulation study is given in order to 1) compare the two procedures used to derive the estimators of the component reliability measures, 2) study the influence of the masking level on the accuracy of the estimators obtained, and 3) study the influence of the masking probability ratio on the accuracy of the estimators obtained.

3.
A series system comprises n components arranged in decreasing order of likelihood of failure. The components are grouped into modules and, on system failure, the modules are replaced in sequence until the system is repaired. Thus, it is not known which component has failed, only which module contains the failed component. It is shown how, for a certain class of life-length distributions, prior information on the relative propensities of the components to fail can be incorporated to estimate the parameters of the life-length distributions of the components. The case where each component life-length is exponentially distributed is studied in detail.

4.
We consider a parallel (1-out-of-n:G) system of n components with constant failure rates and treat three different classes of component testing procedures, all of which guarantee that the given consumer and producer risks are not exceeded. It is necessary to impose certain restrictions on the magnitude of the unknown failure rates for guaranteeing the producer risk. The three classes of component test procedures use Type-I censoring and use decision rules based on: A) the total number of component failures during the testing periods, B) the number of failures for each individual component, and C) the maximum likelihood estimate of system reliability. Based on the requirement that both the consumer and producer risks lie within specified levels, class A plans exhibit lower testing costs in the selected numerical examples.

5.
Maximum likelihood predictive densities (MLPDs) for a future lognormal observation are obtained and their applications to reliability and life testing are considered. When applied to reliability and failure rate estimation, they give estimators that can be much less biased and less variable than the usual maximum likelihood estimates (MLEs) obtained by replacing the unknown parameters in the density function by their MLEs. When applied to lifetime prediction, they give prediction intervals that are shorter than the usual frequentist intervals. Using the MLPDs, it is also rather convenient to construct the shortest prediction intervals. Extensive simulations are performed for comparisons, and a numerical example is given for illustration.

6.
A dynamic reliability problem is considered where system components are operating in time. A general framework for analyzing the relationship of prior information and variance of a Monte Carlo estimator is developed. The variance of an estimator based on less prior information is less than that of an estimator based on more prior information. The first application derives a sequential destruction method as a special case in this general framework. The method uses the order of component failure as prior information instead of the time to failure of components. The second application shows that the use of less prior information than the order of component failure can circumvent difficulties faced by a state transition method. A numerical example is presented.

7.
A multicomponent series system includes a component which deteriorates over time, changing its operating characteristics and, consequently, increasing the failure rates of neighboring components. Preventive replacement of the deteriorating component can be beneficial. Replacement policies that include inspecting the deteriorating component at system failure instances and replacing it if the deterioration exceeds a critical level, or continuously monitoring the deteriorating component, are considered. The system is modeled as a Markov chain solved by an efficient algorithm that exploits the system structure. For a two-component system, a closed-form equation gives the critical level for the minimum-average-cost failure-replacement policy. For the general case, replacement policies are evaluated by mean cost rate and by the ratio of the reduction in the number of failures to the number of preventive replacements.

8.
Hsieh, J.; Ucci, D.R.; Kraft, G.D. Electronics Letters, 1989, 25(23): 1557-1558
A failing digital system may be represented by a multistate Markov diagram in which the system functions in a degraded mode of execution before total failure occurs. The models proposed to date do not consider more than one intermittent fault state, which limits the flexibility of modelling, nor do they take repair into consideration. In the letter the authors present a very general continuous-parameter Markov model at the processor level. They also show that this model can be generalised to evaluate the reliability and performability of multi-processor systems. A closed-form solution is also given for this model.

9.
We develop parametric inferential methods for the competing risks problem where data arise due to multiple causes of failure in several groups with censoring and possibly missing causes. We provide the general likelihood method and the closed-form maximum-likelihood estimators for the exponential model. Parametric tests are given for comparing different causes and groups. An extensive numerical and graphical investigation is presented to substantiate the proposed methods, and a real-data example is given for illustration.
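For the exponential model, the cause-specific MLE has a well-known closed form: the estimated rate for a cause is the number of failures from that cause divided by the total observed time. A minimal single-group sketch with right censoring (the function name and data layout are illustrative):

```python
def exp_competing_risks_mle(data):
    """Closed-form MLEs for cause-specific exponential hazard rates.
    data: list of (time, cause) pairs; cause is None for a censored unit.
    lambda_hat[c] = (# failures from cause c) / (total observed time)."""
    total_time = sum(t for t, _ in data)
    causes = sorted({c for _, c in data if c is not None})
    return {c: sum(1 for _, k in data if k == c) / total_time
            for c in causes}
```

Censored units contribute exposure time to the denominator but no failure count, which is exactly how they enter the exponential likelihood.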

10.
The authors consider the problem of acceptance testing for a parallel (1-out-of-n:G) system of different components with constant failure rates. The components are individually tested and the tests are terminated as soon as a preassigned number of each component fails. The authors provide a criterion for accepting or rejecting the system based on the sum of the logarithms of the total times on test for each component. The critical level for the test statistic is chosen so as to guarantee that the specified consumer and producer risks on the system reliability are not exceeded. The use of this statistic makes the computation of these critical values much simpler as compared with that of a previously used statistic based on the product of the total times on test for each component. Several approximate procedures are considered for deriving these critical values. The authors also formulate the optimization problem for deriving the minimum-cost component-testing plans when a Type-II censored component-test procedure is used for a parallel system.

11.
A combined performance and reliability (performability) measure for gracefully degradable fault-tolerant systems is introduced, and a closed-form, analytic solution is provided for computing the performability of a class of unrepairable systems which can be modeled by general acyclic Markov processes. This allows the study of models which consider the degradation of more than one type of system component, e.g., processors, memories, buses. An efficient evaluation algorithm is provided, with an extensive analysis of its time and space complexity. A numerical example is provided which shows how the combined performance/reliability measure provides for a complete evaluation of the relative merits of different multiprocessor structures.

12.
In many systems which are composed of components with exponentially distributed lifetimes, the system failure time can be expressed as a sum of exponentially distributed random variables. A previous paper notes that there seems to be no convenient closed-form expression covering all cases of this problem, because in one case the expression involves high-order derivatives of products of multiple functions. The authors prove a simple, intuitive generalization of the Leibniz rule (for high-order derivatives of a product of two functions) to products of multiple functions, and use it to simplify this expression, thus giving a closed-form solution. They similarly simplify the state-occupancy probabilities in general Markov models.
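In the simplest case, where all rates are distinct, the density of the sum (the hypoexponential distribution) already has a standard partial-fraction closed form; the paper's contribution concerns the harder case of repeated rates. A sketch of the distinct-rate form, shown as a generic illustration rather than the authors' derivative-based expression:

```python
import math

def hypoexp_pdf(t, rates):
    """Density at t of a sum of independent exponential random variables
    with pairwise-distinct rates, via the partial-fraction closed form:
    f(t) = sum_i [prod_{j!=i} l_j/(l_j - l_i)] * l_i * exp(-l_i * t)."""
    f = 0.0
    for i, li in enumerate(rates):
        coeff = 1.0
        for j, lj in enumerate(rates):
            if j != i:
                coeff *= lj / (lj - li)
        f += coeff * li * math.exp(-li * t)
    return f
```

For two rates this reduces to the familiar λ1·λ2/(λ2−λ1)·(e^{−λ1 t} − e^{−λ2 t}); repeated rates make the coefficients blow up, which is precisely where high-order derivatives enter.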

13.
Digital computer techniques are developed using a) asymptotic distributions of maximum likelihood estimators, and b) a Monte Carlo technique, to obtain approximate system reliability s-confidence limits from component test data. 2-parameter Weibull, gamma, and logistic distributions are used to model the component failures. The components can be arranged in any system configuration: series, parallel, bridge, etc., as long as one can write the equation for system reliability in terms of component reliability. Hypothetical networks of 3, 5, and 25 components are analyzed as examples. Univariate and bivariate asymptotic techniques are compared with a double Monte Carlo method. The bivariate asymptotic technique is shown to be fast and accurate. It can guide decisions during the research and development cycle prior to complete system testing and can be used to supplement system failure data.

14.
Analysts are often interested in obtaining component reliabilities by analyzing system-life data. This is generally done by making a series-system assumption and applying a competing-risks model. These estimates are useful because they reflect component reliabilities after assembly into an operational system under true operating conditions. The fact that most new systems under development contain a large proportion of old technology also supports the approach. In practice, however, this type of analysis is often confounded by the problem of masking (the exact cause of system failure is unknown). This paper derives a likelihood function for the masked-data case and presents an iterative procedure (IMLEP) for finding maximum likelihood estimates and confidence intervals of Weibull component life-distribution parameters. The approach is illustrated with a simple numerical example.

15.
Little work has been done on extending existing models with imperfect debugging to the more realistic situation where new faults are generated from unsuccessful attempts at removing faults completely. This paper presents a software-reliability growth model which incorporates the possibility of introducing new faults into a software system due to the imperfect debugging of the original faults in the system. The original faults manifest themselves as primary failures and are assumed to be distributed as a nonhomogeneous Poisson process (NHPP). Imperfect debugging of each primary failure induces a secondary failure which is assumed to occur in a delayed sense from the occurrence time of the primary failure. The mean total number of failures, comprising the primary and secondary failures, is obtained. The authors also develop a cost model and consider some optimal release policies based on the model. Parameters are estimated using maximum likelihood, and a numerical example is presented.

16.
We consider the problem of acceptance testing for a parallel (1-out-of-n:G) system of n different components with constant failure rates. The components are individually tested and the tests are terminated as soon as a preassigned number of each component fails. This paper provides a criterion for accepting or rejecting the system based on the product of the total times on test for each component. The critical level for the test statistic is chosen so as to guarantee that the specified levels of consumer and producer risks on the system reliability are not exceeded. If the testing costs depend on the number of each component tested, a minimum-cost procedure can be found from the feasible set of plans.
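The underlying statistic is built from per-component Type-II censored tests: n units of a component start on test, testing stops at the r-th failure, and the survivors accrue time equal to the last observed failure time. A sketch under these illustrative assumptions (the function names are not from the paper; the log of the product-of-TTT statistic is computed as a sum of logs for numerical stability):

```python
import math

def total_time_on_test(failure_times, n):
    """Total time on test for a Type-II censored test: n units on test,
    stopped at the r-th failure; the (n - r) survivors each accrue the
    last observed failure time t_(r)."""
    t = sorted(failure_times)
    r = len(t)
    return sum(t) + (n - r) * t[-1]

def log_ttt_statistic(component_tests):
    """Acceptance statistic: sum over components of log(total time on test),
    i.e. the log of the product-of-TTT statistic.
    component_tests: list of (failure_times, n_units) pairs."""
    return sum(math.log(total_time_on_test(ft, n))
               for ft, n in component_tests)
```

The system is then accepted if the statistic exceeds a critical level calibrated to the consumer and producer risks.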

17.
Invariant subpixel material detection in hyperspectral imagery
We present an algorithm for subpixel material detection in hyperspectral data that is invariant to the illumination and atmospheric conditions. The algorithm does not require atmospheric correction. The target material spectral reflectance is the only required prior information. A target material subspace model is constructed from the reflectance using a physical model, and a background subspace model is estimated directly from the image. These two subspace models are used to compute maximum-likelihood estimates (MLEs) for the target material component and the background component at each image pixel. These estimates form the basis of a generalized likelihood ratio test for subpixel material detection. We present experimental results, using Hyperspectral Digital Imagery Collection Experiment (HYDICE) imagery, that demonstrate the utility of the algorithm for subpixel material detection under varying illumination and atmospheric conditions.

18.
A Monte Carlo simulation algorithm for finding MTBF
Prediction of mean time between failures (MTBF) is an important aspect of the initial stage of system development. It is often difficult to predict system MTBF over a given time interval since the component failure processes are extremely complex. The authors present a Monte Carlo simulation algorithm to calculate the MTBF, over a given time interval, of a binary coherent system. The algorithm requires the lifetime distributions of the components and the minimal path sets of the system. The MTBF for a specific time interval, e.g., a month or a year, can be estimated. If the component lifetime distributions are unknown, then a lower bound on system MTBF can be estimated by using known constant failure rates for each component.
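The minimal-path-set representation makes the simulation step straightforward: a coherent system works while at least one minimal path set has all of its components working, so the system life is the maximum over path sets of the minimum component life within each set. A minimal sketch of this step (estimating mean system life rather than the paper's interval-specific MTBF; names and the sampler interface are illustrative):

```python
import random

def mc_mean_system_life(sample_lifetimes, path_sets, n_runs=20000, seed=7):
    """Monte Carlo estimate of the mean life of a binary coherent system.
    sample_lifetimes(rng) -> dict mapping component name to a sampled
    lifetime; path_sets is the list of minimal path sets (sets of names).
    System life per run = max over path sets of (min life within the set)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_runs):
        life = sample_lifetimes(rng)
        total += max(min(life[c] for c in ps) for ps in path_sets)
    return total / n_runs
```

As a sanity check, a series system of two exp(1) components (single path set {a, b}) has true mean life 0.5, while the parallel arrangement (path sets {a} and {b}) has true mean 1.5.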

19.
We consider binary orthogonal signaling over a nonselective Rician-fading channel with additive white Gaussian noise. The received signal over such a channel may have both a specular component and a scatter (Rayleigh-faded) component. If there is only a scatter component, the noncoherent receiver is optimal. If there is only a specular component, the optimal receiver is the coherent receiver. In general, the optimal receiver for a Rician channel depends on the strengths of the two signal components and the noise density, and the set of possible optimal receivers is infinite. We consider a system in which the noncoherent receiver and the coherent receiver are employed in a parallel configuration for a symbol-by-symbol demodulation of the received signal. Each sequence of transmitted symbols produces a sequence at the output of each of the parallel receivers. The task of identifying which of these received sequences is a more reliable reproduction of the transmitted sequence is the data verification problem. In this paper, we show that data verification can be accomplished by combining side information from the demodulators with a suitable error-control coding scheme. The resulting system is a universal receiver that provides good performance over the entire range of channel parameters. In particular, the universal receiver performs better than the traditional noncoherent receiver.

20.
Engineers often make changes during the development of a system in order to correct design weaknesses. If done well, this results in reliability growth (an increase in reliability and mean life) as development continues. The lifetime distribution at each stage in development is assumed to be gamma. Approximate maximum likelihood estimates (MLEs) of the parameters are obtained subject to the conditions that no parameter decreases in the next stage. An iterative procedure involving two constrained nonlinear optimization problems is proposed for obtaining the approximate MLEs. The constrained optimization problems can be computed using isotonic regression; the iterations converge rather quickly. The computations can be performed using common mathematical subroutine packages such as the International Mathematical and Statistical Libraries.
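The isotonic-regression step is classically solved by the pool-adjacent-violators algorithm (PAVA). The sketch below shows generic PAVA for a nondecreasing weighted least-squares fit; the paper's actual constrained subproblems for the gamma parameters differ in detail, so this is an illustration of the core tool, not the authors' procedure.

```python
def pava(y, w=None):
    """Pool-adjacent-violators: weighted least-squares fit of a
    nondecreasing sequence to y, the core step of isotonic regression."""
    if w is None:
        w = [1.0] * len(y)
    # Each block is [weighted mean, total weight, count of pooled points].
    blocks = []
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # Merge backwards while the nondecreasing constraint is violated.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, n2 = blocks.pop()
            m1, w1, n1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(m1 * w1 + m2 * w2) / wt, wt, n1 + n2])
    out = []
    for m, _, n in blocks:
        out.extend([m] * n)
    return out
```

For example, fitting a nondecreasing sequence to stage estimates [3, 1, 2] pools all three stages to their common mean of 2, while an already-monotone input passes through unchanged.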
