Similar Documents
20 similar records found (search time: 93 ms)
1.
This paper describes a method for estimating and forecasting reliability from attribute data, using the binomial model, when reliability requirements are very high and test data are limited. Integer data (specifically, numbers of failures) are converted into non-integer data. The rationale is that when engineering corrective action for a failure is implemented, the probability of recurrence of that failure is reduced; therefore, such failures should not be carried as full failures in subsequent reliability estimates. The reduced failure value for each failure mode is the upper limit on the probability of failure based on the number of successes after engineering corrective action has been implemented. Each failure value is less than one and diminishes as test programme successes continue. These numbers replace the integral numbers (of failures) in the binomial estimate. This method of reliability estimation was applied to attribute data from the life history of a previously tested system, and a reliability growth equation was fitted. It was then 'calibrated' against a current similar system's ultimate reliability requirements to provide a model for reliability growth over its entire life-cycle. The forecast was obtained by extrapolation, comparing current reliability estimates with the expected values computed from the model.
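The reduced-failure idea can be sketched numerically. A minimal illustration, assuming the fractional value carried for a corrected failure mode is the exact upper confidence limit on the recurrence probability given zero recurrences in the successes observed since the fix; the paper's exact formula is not reproduced, and all numbers below are hypothetical:

```python
def reduced_failure_value(successes_after_fix, confidence=0.90):
    # Upper confidence limit on the probability of recurrence given
    # zero failures in `successes_after_fix` trials since the fix:
    # solve (1 - p)**s = 1 - confidence for p.
    return 1.0 - (1.0 - confidence) ** (1.0 / successes_after_fix)

def adjusted_reliability(total_trials, successes_after_fix_per_mode):
    # Replace each integer failure count with its fractional value
    # before forming the binomial point estimate.
    adjusted_failures = sum(reduced_failure_value(s)
                            for s in successes_after_fix_per_mode)
    return 1.0 - adjusted_failures / total_trials

# Two corrected failure modes with 40 and 60 successes since each fix.
r = adjusted_reliability(100, [40, 60])
```

As the abstract states, each fractional value is below one and shrinks as successes accumulate, so the estimate grows with continued successful testing.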

2.
In this paper, we propose an intuitive and practical method for system reliability analysis. Among the existing methods for system reliability analysis, the reliability graph is particularly attractive due to its intuitiveness, even though it is not widely used. We provide an explanation for why it is not widely used, and propose a new method, named the reliability graph with general gates, which is an extension of the conventional reliability graph. An evaluation method utilizing existing commercial or free software tools is also provided. We conclude that the proposed method is intuitive, easy to use, and practical, while as powerful as fault tree analysis, which is currently the most widely used method for system reliability analysis.

3.
During the early stages of the product development process, a vast amount of knowledge and information is generated. However, most of it is subjective (imprecise) in nature and remains unutilized. This paper presents a formal structure for capturing this information and knowledge and utilizing it in reliability improvement estimation. The information is extracted as improvement indices from various design tools, experiments, and design review records and treated as fuzzy numbers or linguistic variables. A fuzzy reasoning method is used to combine and quantify the subjective information to map its impact on product reliability. The crisp output of the fuzzy reasoning process is treated as new evidence and incorporated into a Bayesian framework to update the reliability estimates. A case example is presented to demonstrate the proposed approach.
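One much simplified reading of the update step: treat the crisp fuzzy-reasoning output as pseudo-evidence in a conjugate Beta-binomial update. The `equivalent_trials` weight and the mapping below are assumptions for illustration, not the paper's formulation:

```python
def update_reliability(alpha, beta_, improvement_index, equivalent_trials=10.0):
    # Treat the crisp output of fuzzy reasoning (an improvement index
    # in [0, 1]) as `equivalent_trials` pseudo-observations split
    # between successes and failures, then update the Beta prior.
    a = alpha + improvement_index * equivalent_trials
    b = beta_ + (1.0 - improvement_index) * equivalent_trials
    return a / (a + b)  # posterior mean reliability
```

A stronger improvement index pulls the posterior mean upward, which is the qualitative behavior the abstract describes.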

4.
This paper uses a Bayesian Belief Networks (BBN) methodology to model the reliability of Search And Rescue (SAR) operations within UK Coastguard (Maritime Rescue) coordination centres. This is an extension of earlier work, which investigated the rationale of the government's decision to close a number of coordination centres. The previous study made use of secondary data sources and employed a binary logistic regression methodology to support the analysis. This study focused on the collection of primary data through a structured elicitation process, which resulted in the construction of a BBN. The main findings of the study are that statistical analysis of secondary data can be used to complement BBNs. The former provided a more objective assessment of associations between variables, but was restricted in the level of detail that could be explicitly expressed within the model due to a lack of available data. The latter method provided a much more detailed model, but the validity of the numeric assessments was more questionable. Each method can be used to inform and defend the development of the other. The paper describes in detail the elicitation process employed to construct the BBN and reflects on the potential for bias.

5.
Software fault detection and correction processes are related although different, and they should be studied together. A practical approach is to apply software reliability growth models to model fault detection, with fault correction assumed to be a delayed process. On the other hand, the artificial neural network model, as a data-driven approach, tries to model these two processes together without such assumptions. Specifically, feedforward backpropagation networks have shown advantages over analytical models in fault number predictions. In this paper, the following approach is explored. First, recurrent neural networks are applied to model these two processes together. Within this framework, a systematic network configuration approach is developed with a genetic algorithm according to prediction performance. In order to provide robust predictions, an extra factor characterizing the dispersion of prediction repetitions is incorporated into the performance function. Comparisons with feedforward neural networks and analytical models are made on a real data set.

6.
The development of complex systems involves a multi-tier supply chain, with each organisation allocated a reliability target for their sub-system or component part apportioned from system requirements. Agreements about targets are made early in the system lifecycle when considerable uncertainty exists about the design detail and potential failure modes. Hence resources required to achieve reliability are unpredictable. Some types of contracts provide incentives for organisations to negotiate targets so that system reliability requirements are met, but at minimum cost to the supply chain. This paper proposes a mechanism for deriving a fair price for trading reliability targets between suppliers using information gained about potential failure modes through development and the costs of activities required to generate such information. The approach is based upon Shapley's value and is illustrated through examples for a particular reliability growth model, and associated empirical cost model, developed for problems motivated by the aerospace industry. The paper aims to demonstrate the feasibility of the method and discuss how it could be extended to other reliability allocation models.
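The Shapley value underlying such a pricing mechanism can be computed directly for small supply chains. A toy sketch with a hypothetical two-supplier test-cost function; the paper's reliability growth and cost models are not reproduced:

```python
from itertools import permutations

def shapley_values(players, cost):
    # Average each player's marginal cost contribution over all
    # orderings in which the coalition can be assembled.
    values = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            values[p] += cost(coalition | {p}) - cost(coalition)
            coalition = coalition | {p}
    return {p: v / len(orders) for p, v in values.items()}

# Hypothetical costs: joint testing is cheaper than separate testing.
costs = {frozenset(): 0.0, frozenset({'A'}): 6.0,
         frozenset({'B'}): 4.0, frozenset({'A', 'B'}): 8.0}
share = shapley_values(['A', 'B'], lambda s: costs[s])
```

The shares always sum to the grand-coalition cost, which is what makes the split a defensible "fair price" between suppliers.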

7.
For the last three decades, reliability growth has been studied to predict software reliability in the testing/debugging phase. Most of the models developed are based on the non-homogeneous Poisson process (NHPP), and S-shaped or exponential-shaped behavior is usually assumed. Unfortunately, such models may be suitable only for particular software failure data, narrowing the scope of application. Therefore, from the perspective of learning effects that can influence the process of software reliability growth, we consider that efficiency in testing/debugging depends not only on the ability of the testing staff but also on the learning effect that comes from inspecting the testing/debugging code. The proposed approach can reasonably describe the S-shaped and exponential-shaped types of behavior simultaneously, and the experimental results show a good fit. A comparative analysis evaluating the effectiveness of the proposed model against other software failure models was also performed. Finally, an optimal software release policy is suggested.
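The two classical NHPP shapes referred to above are easy to write down. A minimal sketch; the paper's unified learning-effect model subsumes both but is not reproduced here:

```python
import math

def m_exponential(t, a, b):
    # Goel-Okumoto exponential-shaped mean value function:
    # expected cumulative faults detected by time t.
    return a * (1.0 - math.exp(-b * t))

def m_s_shaped(t, a, b):
    # Delayed S-shaped mean value function: early detection is slowed,
    # e.g. while the testing staff is still learning the code.
    return a * (1.0 - (1.0 + b * t) * math.exp(-b * t))
```

Both curves saturate at the total fault content `a`; the S-shaped curve lies below the exponential one early in testing, which is the learning effect the abstract describes.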

8.
To address reliability growth during the product design process, this paper proposes dividing the whole product system into large-sample and small-sample subsystems, applying a different reliability growth scheme and mathematical analysis model to each. All direct and indirect data are fully exploited and fed back into the design process. A reliability growth analysis database system is built to store the data generated during design and to assist in the design of similar products.

9.
In this paper, we introduce a new reliability growth methodology for one-shot systems that is applicable to the case where all corrective actions are implemented at the end of the current test phase. The methodology consists of four model equations for assessing: expected reliability, the expected number of failure modes observed in testing, the expected probability of discovering new failure modes, and the expected portion of system unreliability associated with repeat failure modes. These model equations provide an analytical framework for which reliability practitioners can estimate reliability improvement, address goodness-of-fit concerns, quantify programmatic risk, and assess reliability maturity of one-shot systems. A numerical example is given to illustrate the value and utility of the presented approach. This methodology is useful to program managers and reliability practitioners interested in applying the techniques above in their reliability growth program.

10.
A generic method for estimating system reliability using Bayesian networks
This study presents a holistic method for constructing a Bayesian network (BN) model for estimating system reliability. BN is a probabilistic approach that is used to model and predict the behavior of a system based on observed stochastic events. The BN model is a directed acyclic graph (DAG) where the nodes represent system components and arcs represent relationships among them. Although recent studies using BNs for estimating system reliability have been proposed, they assume that a pre-built BN has been designed to represent the system. In these studies, the task of building the BN is typically left to a group of specialists who are BN and domain experts. The BN experts must learn about the domain before building the BN, which is generally very time consuming and may lead to incorrect deductions. As no existing study eliminates the need for a human expert in the process of system reliability estimation, this paper introduces a method that uses historical data about the system to be modeled as a BN and provides efficient techniques for automated construction of the BN model, and hence estimation of the system reliability. In this respect K2, a data mining algorithm, is used for finding associations between system components and thus building the BN model. This algorithm uses a heuristic to provide efficient and accurate results while searching for associations. Moreover, no human intervention is necessary during the process of BN construction and reliability estimation. The paper provides a step-by-step illustration of the method and an evaluation of the approach with literature case examples.

11.
This paper shows the significant influence of climatic conditions on the reliability of components in telephone exchanges. The large number of observations made enabled extensive data to be collected, which were processed using sophisticated data-analysis methods, in particular factor analysis. Prominence was thus given to the various parameters affecting the reliability of components (temperature, hygrometry, location of the board, air-conditioning process, etc.). A physical analysis of component failures complemented the statistical study. It resulted in practical measures being taken to improve the field reliability of French telecommunication equipment.

12.
This paper considers a difficult but practical circumstance of civil infrastructure management: deterioration/failure data for the infrastructure system are absent, while only condition-state data for its components are available. The goal is to develop a framework for estimating the time-varying reliabilities of civil infrastructure facilities under such a circumstance. A novel method of analyzing time-varying condition-state data that report only the operational/non-operational status of the components is proposed to update the reliabilities of civil infrastructure facilities. The proposed method assumes that degradation arrivals can be modeled as a Poisson process with unknown time-varying arrival rate and damage impact, and that the target system can be represented as a fault-tree model. To accommodate large uncertainties, a Bayesian algorithm is proposed, and the reliability of the infrastructure system can be quickly updated based on the condition-state data. Use of the new method is demonstrated with a real-world example of a hydraulic spillway gate system.
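The conjugate building block of such a Bayesian update is simple for a Poisson arrival model with a Gamma prior on the rate. A sketch of that block only; the paper's algorithm handles a time-varying rate and condition-state data, which this does not:

```python
def update_arrival_rate(alpha, beta_, n_degradations, exposure_time):
    # Gamma(alpha, beta) prior on a Poisson arrival rate; observing
    # n events over exposure T gives posterior Gamma(alpha + n, beta + T).
    a = alpha + n_degradations
    b = beta_ + exposure_time
    return a / b  # posterior mean rate (events per unit time)
```

Because the update is closed-form, the reliability estimate can be refreshed quickly as each new batch of condition-state data arrives.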

13.
Despite the recent revolution in statistical thinking and methodology, practical reliability analysis and assessment remains almost exclusively based on a black-box approach employing parametric statistical techniques and significance tests. Such practice, which is largely automatic for the industrial practitioner, implicitly involves a large number of physically unreasonable assumptions that in practice are rarely met. Extensive investigation of reliability source data indicates a variety of differing data structures, which contradict the assumptions implicit in the usual methodology. In addition, lack of homogeneity in the data, due, for instance, to multiple failure modes or misdefinition of environment, is commonly overlooked by the standard methodology. In this paper we argue the case for exploring reliability data. The pattern revealed by such exploration of a data set provides intrinsic information which helps to reinforce and reinterpret the engineering knowledge about the physical nature of the technological system to which the data refer. Employed in this way, the data analyst and the reliability engineer are partners in an iterative process aimed at a greater understanding of the system and the process of failure. Despite current standard practice, the authors believe it critical that the structure of the data analysis reflect the structure in the failure data. Although the standard methodology provides an easy and repeatable analysis, the authors' experience indicates that it is rarely an appropriate one. It is ironic that, whereas methods to analyse the data structures commonly found in reliability data have been available for some time, insistence on the standard black-box approach has prevented the identification of such 'abnormal' features in reliability data and the application of these approaches.
We discuss simple graphical procedures to investigate the structure of reliability data, as well as more formal testing procedures which assist in decision-making. Partial reviews of such methods have appeared previously, and a more detailed development of the exploration approach and of the appropriate analysis it implies will be dealt with elsewhere. Here, our aim is to argue the case for the reliability analyst to LOOK AT THE DATA, and to analyse it accordingly.

14.
Reliability growth models are commonly used in the Department of Defense (DoD) to plan, track, and project reliability during system acquisition and testing. We describe two commonly used classes of reliability growth models for continuous failure time data and the metrics appropriate for their use. We also present two Bayesian reliability growth models that are based on the DoD models. The Bayesian models are easily interpretable in a statistical framework, which supports estimation and uncertainty quantification. Our goal is to provide a practical understanding of the development, implementation, and use of reliability growth models across a sequence of DoD testing events.
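One of the standard continuous-failure-time models used in DoD practice is the power-law (Crow/AMSAA) NHPP. A minimal time-truncated maximum-likelihood sketch; the Bayesian versions discussed in the paper are not reproduced, and the failure times below are made up:

```python
import math

def crow_amsaa_mle(failure_times, T):
    # Time-truncated MLE for the power-law NHPP with intensity
    # lambda(t) = lam * beta * t**(beta - 1); beta < 1 indicates
    # reliability growth (decreasing failure intensity).
    n = len(failure_times)
    beta = n / sum(math.log(T / t) for t in failure_times)
    lam = n / T ** beta
    return lam, beta

# Cumulative failure times (hours) on a test truncated at T = 1000 h.
lam, beta = crow_amsaa_mle([10.0, 50.0, 120.0, 300.0, 600.0], 1000.0)
```

By construction the fitted expected failure count at the truncation time equals the observed count, and `beta` below one signals that the fix process is improving reliability.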

15.
Control charts are widely used for process monitoring in the manufacturing industry. Little research is available on their use to monitor the failure process of components or systems, which is important for equipment performance monitoring. Some Shewhart control charts, especially those for the number of defects, can be used for monitoring the number of failures per fixed interval; however, they are not effective, especially when the failure frequency becomes small. A recent control scheme based on the cumulative quantity between observations of defects has been proposed which can easily be adapted to monitor the failure process for exponentially distributed inter-failure times. An investigation of its use for reliability monitoring is presented in this paper, and the scheme can easily be extended to monitor inter-failure times that follow other distributions, such as the Weibull distribution. Furthermore, the scheme is extended to monitoring the time required to observe a fixed number of failures. The advantages of this scheme include the fact that it does not require any subjective sample size, can be used for both high- and low-reliability items, and can detect process improvement even in a high-reliability environment.
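For the exponential case, the chart's control limits are simply quantiles of the in-control time-between-failures distribution. A minimal sketch assuming a two-sided chart with overall false-alarm rate `alpha`; the specific limits used in the paper may differ:

```python
import math

def cqc_limits(theta0, alpha=0.0027):
    # Probability limits for exponentially distributed time between
    # failures with in-control mean theta0: the alpha/2 and
    # 1 - alpha/2 quantiles of the exponential distribution.
    lcl = -theta0 * math.log(1.0 - alpha / 2.0)  # short gap: deterioration
    ucl = -theta0 * math.log(alpha / 2.0)        # long gap: improvement
    return lcl, ucl

lcl, ucl = cqc_limits(1000.0)
```

A plotted inter-failure time below the lower limit signals more frequent failures; one above the upper limit signals improvement, which is how the scheme detects change even for high-reliability items.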

16.
Many software reliability growth models (SRGMs) based on a non-homogeneous Poisson process (NHPP) have been developed under the assumption of a constant fault detection rate (FDR) and a fault detection process dependent only on the residual fault content. In this paper we develop an NHPP-based SRGM using a different approach: the fault detection process depends not only on the residual fault content but also on the testing time. It incorporates a realistic situation encountered in software development, where the FDR is not constant over the entire testing process but changes due to variations in resource allocation, defect density, running environment and testing strategy (called the change-point). Here, the FDR is defined as a function of testing time. The proposed model also incorporates testing effort with the change-point concept, which is useful in solving the problems of runaway software projects and gives project managers a testing-effort control technique and the flexibility to obtain a desired reliability level. Failure data collected from software development projects are used to show the model's applicability and effectiveness. The Statistical Package for the Social Sciences (SPSS), based on the least-squares method, has been used to estimate the unknown parameters. The mean squared error (MSE), relative predictive error (RPE), average mean squared error (AMSE) and average relative predictive error (ARPE) have been used to validate the model. The results show that the proposed model is accurate and highly predictive, and incorporates industrial software project concepts.
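The change-point idea can be illustrated with a piecewise-constant FDR. A simplified sketch that omits the testing-effort function the paper also incorporates; parameter names are hypothetical:

```python
import math

def m_change_point(t, a, b1, b2, tau):
    # NHPP mean value function with fault detection rate b1 before
    # the change-point tau and b2 after:
    # m(t) = a * (1 - exp(-integral of b(s) ds over [0, t])).
    exposure = b1 * t if t <= tau else b1 * tau + b2 * (t - tau)
    return a * (1.0 - math.exp(-exposure))
```

The cumulative exposure is continuous at `tau`, so the fitted curve has no jump; only its slope changes when the testing strategy or environment changes.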

17.
Most models for software reliability analysis are based on reliability growth models which deal with the fault detection process. This is done either by assuming that faults are corrected immediately after being detected or by not counting the time to correct a fault. Some models have been developed to relax this assumption. However, unlike the fault-detection process, few published data sets are available to support the modeling and analysis of both the fault detection and removal processes. In this paper, some useful approaches to the modeling of both software fault-detection and fault-correction processes are discussed. Further analysis of the software release time decision that incorporates both a fault-detection model and a fault-correction model is also presented. The procedure is easy to use and useful for practical applications. The approach is illustrated with an actual data set from a software development project. Copyright © 2006 John Wiley & Sons, Ltd.
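A minimal sketch of the delayed-correction idea, pairing an exponential detection mean value function with a constant debugging delay; the constant-delay form is only one of several correction models discussed in this literature:

```python
import math

def m_detected(t, a, b):
    # Expected cumulative faults detected by time t (exponential NHPP).
    return a * (1.0 - math.exp(-b * t)) if t > 0.0 else 0.0

def m_corrected(t, a, b, delay):
    # Correction modeled as the detection process shifted by a
    # constant fault-removal delay.
    return m_detected(t - delay, a, b)
```

The gap between the two curves at a candidate release time is the expected number of detected-but-uncorrected faults, which is what a release-time decision incorporating both processes must weigh.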

18.
This paper is the first of what is intended to be a series of papers which investigate the foundations of reliability theory, particularly when applied to the prediction process. It will contrast current reliability practice against those practices common in normal science and engineering. The claim will be made that in general the prediction process as used in reliability, when stripped of the mathematical embellishments, is no more than simple enumeration: a method long held by the philosophers of science to be unreliable and in general a poor basis on which to make predictions. This initial paper rejects the statistical method as being an insufficient basis for making predictions and claims that it is incapable of logically supporting its conclusions. Although no evidence is provided to substantiate this claim, a number of scientific methods, both of historical and present day importance, are briefly reviewed with which one can contrast the statistical method.

19.
Various schemes have been created for verifying that reliability is not degraded during production. These include the periodic performance of reliability tests during production, three versions of an all-equipment reliability test plan and Bayesian approaches. Each method has its drawbacks. The purpose of all of these is to verify that the production process is continuing to produce products of acceptable reliability, for which the long-existing tools of statistical process control are directly applicable and advantageous. A method of verifying production reliability based on the use of a control chart for failure rate is proposed as a better way than the current standards and alternatives discussed in this paper.

20.
This paper presents a method that will drastically reduce the calculation effort required to obtain quantitative safety and reliability assessments to determine safety integrity levels for applications in the process industry. The method described combines all benefits of Markov modeling with the practical benefits of reliability block diagrams.
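The reliability-block-diagram side of such a combination reduces to two elementary rules. A sketch assuming independent blocks; the Markov part, which captures repair and diagnostic behavior, is not reproduced:

```python
def rbd_series(availabilities):
    # A series block works only if every sub-block works.
    r = 1.0
    for a in availabilities:
        r *= a
    return r

def rbd_parallel(availabilities):
    # A parallel (redundant) block fails only if every sub-block fails.
    q = 1.0
    for a in availabilities:
        q *= (1.0 - a)
    return 1.0 - q
```

In a combined method, each block's availability would typically come from a small Markov model of that sub-system, and the RBD rules aggregate the blocks cheaply.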


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号